AI technology drives sharp rise in synthetic abuse material

AI is increasingly being used to produce highly realistic synthetic abuse videos, raising alarm among regulators and industry bodies.

According to new data published by the Internet Watch Foundation (IWF), 1,286 individual AI-generated abuse videos were identified during the first half of 2025, compared to just two in the same period last year.

No longer crude or glitch-filled, such material now appears so lifelike that under UK law it must be treated as if it were an authentic recording.

More than 1,000 of the videos fell into Category A, the most serious classification involving depictions of extreme harm. The number of webpages hosting this type of content has also risen sharply.

Derek Ray-Hill, interim chief executive of the IWF, expressed concern that longer-form synthetic abuse films are now inevitable unless binding safeguards around AI development are introduced.

Safeguarding minister Jess Phillips described the figures as ‘utterly horrific’ and confirmed two new laws are being introduced to address both those creating this material and those providing tools or guidance on how to do so.

IWF analysts say video quality has advanced significantly. What once involved clumsy, easily detected manipulation is now alarmingly convincing, complicating efforts to monitor and remove such content.

The IWF encourages the public to report concerning material and share the exact web page where it is located.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft and Salesforce use AI to cut costs and reshape workforce

Microsoft is reporting substantial productivity improvements across its operations, thanks to the growing integration of AI tools in daily workflows.

Judson Althoff, the company’s chief commercial officer, stated during a recent presentation that AI contributed to savings of over $500 million in Microsoft’s call centres last year alone.

The technology has reportedly improved employee and customer satisfaction while supporting operations in sales, customer service, and software engineering. Microsoft is also now using AI to handle interactions with smaller clients, streamlining engagement without significantly expanding headcount.

The developments follow Microsoft’s decision last week to lay off over 9,000 employees, its third round of cuts in 2025, bringing the year’s total to around 15,000.

Although it remains unclear whether automation directly caused the job losses, CEO Satya Nadella has previously stated that AI now generates 20 to 30 percent of the code in Microsoft repositories.

Similar shifts are under way at Salesforce, where CEO Marc Benioff has openly acknowledged AI’s growing role in company operations and resource planning.

During a recent analyst call, Robin Washington, Salesforce’s CFO and COO, confirmed that hiring has slowed and that 500 customer service roles have been reassigned internally.

The adjustment is expected to result in cost savings of $50 million, as the company focuses on optimising operations through digital transformation. Benioff also disclosed that AI performs between 30 and 50 percent of work previously handled by staff, contributing to workforce realignment.

Companies across the tech sector are rapidly adopting AI to improve efficiency, even as the broader implications for employment and labour markets continue to emerge.

AI glasses deliver real-time theatre subtitles

An innovative trial at Amsterdam’s Holland Festival saw Dutch company Het Nationale Theatre, in partnership with XRAI and Audinate, unveil smart glasses that project real-time subtitles in 223 languages via a Dante audio network and AI software.

Attendees of The Seasons experienced live transcription and translation streamed directly to XREAL AR glasses. Each actor’s microphone feed is processed by XRAI’s AI, with subtitles overlaid in matching colours to distinguish the speakers on stage.

Aiming to enhance the theatre’s accessibility, the system supports non-Dutch speakers and those with hearing loss. Testing continues this summer, with full implementation expected from autumn.

The system, LiveText, discards the dated back-of-house captioning method. Instead, subtitles appear in real time at an actor-appropriate visual depth, automatically handling complex languages and writing systems.
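The per-speaker colouring described above can be illustrated with a minimal sketch. This is a hypothetical illustration, not XRAI’s actual API: the `SubtitleRouter` class, the colour palette, and the speaker names are all invented for the example. It shows only the core idea of assigning each microphone feed a stable colour and emitting translated lines tagged with it.

```python
# Hypothetical sketch of per-speaker subtitle colouring, as described for
# LiveText: each actor's feed is transcribed/translated upstream, then
# rendered in a colour tied consistently to that speaker.

from dataclasses import dataclass
from itertools import cycle

# Invented palette; the real system's colour choices are not documented here.
PALETTE = cycle(["#FFD700", "#00BFFF", "#FF6EC7", "#7CFC00"])


@dataclass
class Subtitle:
    speaker: str
    text: str
    colour: str


class SubtitleRouter:
    """Assigns each speaker a stable colour and emits subtitle events."""

    def __init__(self):
        self._colours = {}

    def colour_for(self, speaker: str) -> str:
        # The first time a speaker appears, draw the next palette colour;
        # afterwards, always reuse the same one.
        if speaker not in self._colours:
            self._colours[speaker] = next(PALETTE)
        return self._colours[speaker]

    def emit(self, speaker: str, translated_text: str) -> Subtitle:
        return Subtitle(speaker, translated_text, self.colour_for(speaker))


router = SubtitleRouter()
a = router.emit("Actor 1", "Spring has come at last.")
b = router.emit("Actor 2", "And with it, the rains.")
c = router.emit("Actor 1", "Sit with me a while.")
```

In this sketch, both of Actor 1’s lines carry the same colour while Actor 2’s differs, mirroring how the glasses let the audience tell speakers apart at a glance.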

Proponents believe the glasses mark a breakthrough for inclusion, with potential uses at international conferences, music festivals and other live events worldwide.

Cyber defence effort returns to US ports post-pandemic

The US Cybersecurity and Infrastructure Security Agency (CISA) has resumed its seaport cybersecurity exercise programme. Initially paused due to the pandemic and other delays, the initiative is now returning to ports such as Savannah, Charleston, Wilmington and potentially Tampa.

These proof-of-concept tabletop exercises are intended to help ports prepare for cyber threats by developing a flexible, replicable framework. Each port functions uniquely, yet common infrastructure and shared vulnerabilities make standardised preparation critical for effective crisis response.

CISA warns that threats targeting ports have grown more severe, with nation states exploiting AI-powered techniques. Some US ports, including Houston, have already fended off cyberattacks, and Chinese-made systems dominate critical logistics, raising national security concerns.

Private ownership of most port infrastructure demands strong public-private partnerships to maintain cybersecurity. CISA aims to offer a shared model that ports across the country can adapt to improve cooperation, resilience, and threat awareness.

Meta offers $200 million to top AI talent as superintelligence race heats up

Meta has reportedly offered over $200 million in compensation to Ruoming Pang, a former senior AI engineer at Apple, as it escalates its bid to dominate the AI arms race.

The offer, which includes long-term stock incentives, far exceeded what Apple was willing to match and is seen as one of Silicon Valley’s most aggressive poaching efforts.

The move is part of Meta’s broader campaign to build a world-class team under its new Meta Superintelligence Lab (MSL), which is focused on developing artificial general intelligence (AGI).

The division has already attracted prominent names, including ex-GitHub CEO Nat Friedman, AI investor Daniel Gross, and Scale AI co-founder Alexandr Wang, who joined as Chief AI Officer through a $14.3 billion stake deal.

Most compensation offers in the MSL reportedly rival CEO packages at global banks, but they are heavily performance-based and tied to long-term equity vesting.

Meta’s mix of base salary, signing bonuses, and high-value stock options is designed to attract and retain elite AI talent amid a fierce talent war with OpenAI, Google, and Anthropic.

OpenAI CEO Sam Altman recently claimed Meta has dangled bonuses up to $100 million to lure staff away, though he insists many stayed for cultural reasons.

Still, Meta has already hired more than 10 researchers from OpenAI and poached talent from Google DeepMind, including principal researcher Jack Rae.

The AI rivalry could come to a head as Altman and Zuckerberg meet at the Sun Valley conference this week.

Kazakhstan rises as an AI superpower

Since the launch of its Digital Kazakhstan initiative in 2017, the country has shifted from resource-dependent roots to digital leadership.

It ranks 24th globally on the UN’s e‑government index and among the top 10 in online service delivery. Over 90% of public services, such as registrations, healthcare access, and legal documentation, are digitised, aided by mobile apps, biometric ID and QR authentication.

Central to this effort are a Tier III data-centre-based AI supercluster, due to launch in July 2025, and the Alem.AI centre, both designed to supply computing power to universities, startups and enterprises.

Kazakhstan is also investing heavily in talent and innovation. It aims to train up to a million AI-skilled professionals and supports over 1,600 startups at Astana Hub. Venture capital surpassed $250 million in 2024, bolstered by a new $1 billion Qazaqstan Venture Group fund.

Infrastructure upgrades, such as a 3,700 km fibre-optic corridor between China and the Caspian Sea, support a growing tech ecosystem.

Regulatory milestones include planned AI law reforms, data‑sovereignty zones like CryptoCity, and digital identity frameworks. These prepare Kazakhstan to become Central Asia’s digital and AI nexus.

AI that serves communities, not the other way round

At the WSIS+20 High-Level Event in Geneva, a vivid discussion unfolded around how countries in the Global South can build AI capacity from the ground up, rooted in local realities rather than externally imposed models. Organised by Diplo, the Permanent Mission of Kenya to the UN in Geneva, Microsoft, and IT for Change, the session used the fictional agricultural nation of ‘Landia’ to spotlight the challenges and opportunities of community-centred AI development.

With weak infrastructure, unreliable electricity, and fragmented data ecosystems, Landia embodies the typical constraints many developing nations face as they navigate the AI revolution.

UN Under-Secretary-General and Special Envoy for Digital and Emerging Technologies Amandeep Singh Gill presented a forthcoming UN report proposing a five-tiered framework to guide countries from basic AI literacy to full development capacity. He stressed the need for tailored, coordinated international support—backed by a potential global AI fund—to avoid the fragmented aid pitfalls seen in climate and health sectors.

Microsoft’s Ashutosh Chadha echoed that AI readiness is not just a tech issue but fundamentally a policy challenge, highlighting the importance of data governance, education systems, and digital infrastructure as foundations for meaningful AI use.

Civil society voices, particularly from IT4Change’s Anita Gurumurthy and Nandini Chami, spoke about ‘regenerative AI’—AI that is indigenous, inclusive, and modular. They advocated for small-scale models that can run on local data and infrastructures, proposing creative use of community media archives and agroecological knowledge.

Speakers stressed that technology should adapt to community needs, not the reverse, and that AI must augment—not displace—traditional practices, especially in agriculture where livelihoods are at stake.

Ultimately, the session crystallised around a core principle: AI must be developed with—not for—local communities. Participants called for training unemployed youth to support rural farmers with accessible AI tools, urged governments to invest in basic infrastructure alongside AI capacity, and warned against replicating inequalities through automation.

The session concluded with optimism and a commitment to continue this global-local dialogue beyond Geneva, ensuring AI’s future in the Global South is not only technologically viable, but socially just.

Track key events from the WSIS+20 High-Level Event 2025 on our dedicated page.

Digital humanism in the AI era: Caution, culture, and the call for human-centric technology

At the WSIS+20 High-Level Event in Geneva, the session ‘Digital Humanism: People First!’ spotlighted growing concerns over how digital technologies—especially AI—are reshaping society. Moderated by Alfredo M. Ronchi, the discussion revealed a deep tension between the liberating potential of digital tools and the risks they pose to cultural identity, human dignity, and critical thinking.

Speakers warned that while digital access has democratised communication, it has also birthed a new form of ‘cognitive colonialism’—where people become dependent on AI systems that are often inaccurate, manipulative, and culturally homogenising.

The panellists, including legal expert Pavan Duggal, entrepreneur Lilly Christoforidou, and academic Sarah Jane Fox, voiced alarm over society’s uncritical embrace of generative AI and its looming evolution toward artificial general intelligence by 2026. Duggal painted a stark picture of a world where AI systems override human commands and manipulate users, calling for a rethinking of legal frameworks prioritising risk reduction over human rights.

Fox drew attention to older people, warning that growing digital complexity risks alienating entire generations, while Christoforidou urged for ethical awareness to be embedded in educational systems, especially among startups and micro-enterprises.

Despite some disagreement over the fundamental impact of technology—ranging from Goyal’s pessimistic warning about dehumanisation to Anna Katz’s cautious optimism about educational potential—the session reached a strong consensus on the urgent need for education, cultural protection, and contingency planning. Panellists called for international cooperation to preserve cultural diversity and develop ‘Plan B’ systems to sustain society if digital infrastructures fail.

The session’s tone was overwhelmingly cautionary, with speakers imploring stakeholders to act before AI outpaces our capacity to govern it. Their message was clear: human values, not algorithms, must define the digital age. Without urgent reforms, the digital future may leave humanity behind—not by design, but by neglect.

Track all key events from the WSIS+20 High-Level Event 2025 on our dedicated page.

EU urges stronger AI oversight after Grok controversy

A recent incident involving Grok, the AI chatbot developed by xAI, has reignited European Union calls for stronger oversight of advanced AI systems.

Comments generated by Grok prompted criticism from policymakers and civil society groups, leading to renewed debate over AI governance and voluntary compliance mechanisms.

The chatbot’s responses, which circulated earlier this week, included highly controversial language and references to historical figures. In response, xAI stated that the content was removed and that technical steps were being taken to prevent similar outputs from appearing in the future.

European policymakers said the incident highlights the importance of responsible AI development. Brando Benifei, an Italian lawmaker who co-led the EU AI Act negotiations, said the event illustrates the systemic risks the new regulation seeks to mitigate.

Christel Schaldemose, a Danish member of the European Parliament and co-lead on the Digital Services Act, echoed those concerns. She emphasised that such incidents underline the need for clear and enforceable obligations for developers of general-purpose AI models.

The European Commission is preparing to release guidance aimed at supporting voluntary compliance with the bloc’s new AI legislation. This code of practice, which has been under development for nine months, is expected to be published this week.

Earlier drafts of the guidance included provisions requiring developers to share information on how they address systemic risks. Reports suggest that some of these provisions may have been weakened or removed in the final version.

A group of five lawmakers expressed concern over what they described as the last-minute removal of key transparency and risk mitigation elements. They argue that strong guidelines are essential for fostering accountability in the deployment of advanced AI models.

The incident also brings renewed attention to the Digital Services Act and its enforcement, as X, the social media platform where Grok operates, is currently under EU investigation for potential violations related to content moderation.

General-purpose AI systems, such as OpenAI’s GPT, Google’s Gemini and xAI’s Grok, will be subject to additional requirements under the EU AI Act beginning 2 August. Obligations include disclosing training data sources, addressing copyright compliance, and mitigating systemic risks.

While these requirements are mandatory, their implementation is expected to be shaped by the Commission’s voluntary code of practice. Industry groups and international stakeholders have voiced concerns over regulatory burdens, while policymakers maintain that safeguards are critical for public trust.

The debate over Grok’s outputs reflects broader challenges in balancing AI innovation with the need for oversight. The EU’s approach, combining binding legislation with voluntary guidance, seeks to offer a measured path forward amid growing public scrutiny of generative AI technologies.

Perplexity launches AI browser to challenge Google Chrome

Perplexity AI, backed by Nvidia and other major investors, has launched Comet, an AI-driven web browser designed to rival Google Chrome.

The browser uses ‘agentic AI’ that performs tasks, makes decisions, and simplifies workflows in real time, offering users an intelligent alternative to traditional search and navigation.

Comet’s assistant can compare products, summarise articles, book meetings, and handle research queries through a single interface. Initially available to subscribers of Perplexity Max at US$200 per month, Comet will gradually roll out more broadly by invitation over the summer.

The launch signals Perplexity’s move into the competitive browser space, where Chrome currently dominates with a 68 per cent global market share.

The company aims to challenge not only Google’s and Microsoft’s browsers but also compete with OpenAI, which recently introduced search to ChatGPT. Unlike many AI tools, Comet stores data locally and does not train on personal information, positioning itself as a privacy-first solution.

Still, Perplexity has faced criticism for using content from major media outlets without permission. In response, it launched a publisher partnership programme to address concerns and build collaborative relationships with news organisations such as Forbes and Dow Jones.
