Rethinking AI in journalism with global cooperation

At the Internet Governance Forum 2025 in Lillestrøm, Norway, a vibrant multistakeholder session spotlighted the ethical dilemmas of AI in journalism and digital content. The event was hosted by R&W Media and introduced the Haarlem Declaration, a global initiative to promote responsible AI practices in digital media.

Central to the discussion was the unveiling of an ‘ethical AI checklist,’ designed to help organisations uphold human rights, transparency, and environmental responsibility while navigating AI’s expanding role in content creation. Speakers emphasised a people-centred approach to AI, advocating for tools that support rather than replace human decision-making.

Ernst Noorman, the Dutch Ambassador for Cyber Affairs, called for AI policies rooted in international human rights law, highlighting Europe’s Digital Services and AI Acts as potential models. Meanwhile, grassroots organisations from the Global South shared real-world challenges, including algorithmic bias, language exclusions, and environmental impacts.

Taysir Mathlouthi of Hamleh detailed efforts to build localised AI models in Arabic and Hebrew, while Nepal’s Yuva organisation, represented by Sanskriti Panday, explained how small NGOs balance ethical use of generative tools like ChatGPT with limited resources. The Global Forum for Media Development’s Laura Becana Ball introduced the Journalism Cloud Alliance, a collective aimed at making AI tools more accessible and affordable for newsrooms.

Despite enthusiasm, participants acknowledged hurdles such as checklist fatigue, lack of capacity, and the need for AI literacy training. Still, there was a shared sense of urgency and optimism, with the consensus that ethical frameworks must be embedded from the outset of AI development and not bolted on as an afterthought.

In closing, organisers invited civil society and media groups to endorse the Haarlem Declaration and co-create practical tools for ethical AI governance. While challenges remain, the forum set a clear agenda: ethical AI in media must be inclusive, accountable, and co-designed by those most affected by its implementation.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

Lawmakers at IGF 2025 call for global digital safeguards

At the Internet Governance Forum (IGF) 2025 in Norway, a high‑level parliamentary roundtable convened global lawmakers to tackle the pressing challenge of digital threats to democracy. Led by moderator Nikolis Smith, the discussion included Martin Chungong, Secretary‑General of the Inter‑Parliamentary Union (via video), and MPs from Norway, Kenya, California, Barbados, and Tajikistan. The central concern was how AI, disinformation, deepfakes, and digital inequality jeopardise truth, electoral integrity, and public trust.

Grunde Almeland, Member of the Norwegian Parliament, warned: ‘Truth is becoming less relevant … it’s harder and harder to pierce [confirmation‑bias] bubbles with factual debate and … facts.’ He championed strong, independent media, noting Norway’s ranking as ‘number one on the press freedom index’, which he attributed to its editorial independence and extensive public funding. Almeland emphasised that legislation exists, but practical implementation and international coordination are key.

Kenyan Senator Catherine Mumma described a comprehensive legal framework—including cybercrime, data protection, and media acts—but admitted gaps in tackling misinformation. ‘We don’t have a law that specifically addresses misinformation and disinformation,’ she said, adding that social‑media rumours ‘[sometimes escalate] to violence’, especially around elections. Mumma called for balanced regulation that safeguards innovation, human rights, and investment in digital infrastructure and inclusion.

California Assembly Member Rebecca Bauer‑Kahan outlined her state’s trailblazing privacy and AI regulations. She highlighted a new law mandating watermarking of AI‑generated content and requiring political‑advert disclosures, although these face legal challenges as potentially ‘forced speech.’ Bauer‑Kahan stressed the need for ‘technology for good,’ including funding universities to develop watermarking and authentication tools—like Adobe’s system for verifying official content—emphasising that visual transparency restores trust.

Barbados MP Marsha Caddle recounted a recent deepfake falsely attributed to her prime minister, saying it risked ‘put[ting] at risk … global engagement.’ She promoted democratic literacy and transparency, explaining that parliamentary meetings are broadcast live to encourage public trust. She also praised local tech platforms such as Zindi in Africa, saying they foster home‑grown solutions to combat disinformation.

Tajikistan MP Zafar Alizoda highlighted regional disparities in data protections, noting that while EU citizens benefit from GDPR, users in Central Asia remain vulnerable. He urged platforms to adopt uniform global privacy standards: ‘Global platforms … must improve their policies for all users, regardless of the country of the user.’

Several participants—including John K.J. Kiarie, MP from Kenya—raised the crucial issue of ‘technological dumping,’ whereby wealthy nations and tech giants export harmful practices to vulnerable regions. Kiarie warned: ‘My people will be condemned to digital plantations… just like … slave trade.’ The consensus called for global digital governance treaties akin to nuclear or climate accords, alongside enforceable codes of conduct for Big Tech.

Despite challenges—such as balancing child protection, privacy, and platform regulation—parliamentarians reaffirmed shared goals: strengthening independent media, implementing watermarking and authentication technologies, increasing public literacy, ensuring equitable data protections, and fostering global cooperation. As Grunde Almeland put it: ‘We need to find spaces where we work together internationally… to find this common ground, a common set of rules.’ Their unified message: safeguarding democracy in the digital age demands national resilience and collective global action.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

FC Barcelona documents leaked in ransomware breach

A recent cyberattack on French insurer SMABTP’s Spanish subsidiary, Asefa, has led to the leak of over 200GB of sensitive data, including documents related to FC Barcelona.

The ransomware group Qilin has claimed responsibility for the breach, highlighting the growing threat posed by such actors. With high-profile victims now in the spotlight, the reputational damage could be substantial for Asefa and its clients.

The incident comes amid growing concern among UK small and medium-sized enterprises (SMEs) about cyber threats. According to GlobalData’s UK SME Insurance Survey 2025, more than a quarter of SMEs have been influenced by media reports of cyberattacks when purchasing cyber insurance.

Meanwhile, nearly one in five cited a competitor’s victimisation as a motivating factor.

Over 300 organisations have fallen victim to Qilin in the past year alone, reflecting a broader rise in AI-enabled cybercrime.

AI allows cybercriminals to refine their methods, making attacks more effective and challenging to detect. As a result, companies are increasingly recognising the importance of robust cybersecurity measures.

With threats escalating, there is an urgent call for insurers to offer more tailored cyber coverage and proactive services. The breach involving FC Barcelona is a stark reminder that no organisation is immune and that better risk assessment and resilience planning are now business essentials.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Tailored AI agents improve work output—at a social cost

AI agents can significantly improve workplace productivity when tailored to individual personality types, according to new research from the Massachusetts Institute of Technology (MIT). However, the study also found that increased efficiency may come at the expense of human social interaction.

Led by Professor Sinan Aral and postdoctoral associate Harang Ju from MIT Sloan School of Management, the research revealed that human workers collaborating with AI agents completed tasks 60% more efficiently. This gain was partly attributed to a 23% reduction in social messages between team members.

The findings come amid a surge in the adoption of AI agents. A recent PwC survey found that 79% of senior executives had implemented AI agents in their organisations, with 66% reporting productivity gains. Agents are used in roles ranging from customer support to executive assistance and data analysis.

Aral and Ju developed a platform called Pairit (formerly MindMeld) to examine how AI affects team dynamics. In one of their experiments, over 2,000 participants were randomly assigned to human-only teams or teams mixed with AI agents. The groups were tasked with creating advertisements for a think tank.

Teams that included AI agents produced more content and higher-quality ad copy, but their human members communicated less, especially regarding emotional and rapport-building messages.

The study also highlighted the importance of matching AI traits to human personalities. For example, conscientious humans worked more effectively with open AI agents, whereas extroverted humans underperformed when paired with highly conscientious AI counterparts.

‘AI traits can complement human personalities to enhance collaboration,’ the researchers noted. However, they stressed that the same AI assistant may not suit everyone.

The insight underpins the launch of their new venture, Pairium AI, which aims to develop agentic AI that adapts to individual work styles. The company promotes its mission as ‘personalising the Agentic Age.’

Ju emphasised the importance of compatibility: ‘You don’t work the same way with all colleagues—AI should adapt in the same way.’

Devanshu Mehrotra, an analyst at Gartner, described the research as groundbreaking. ‘This opens the door to a much deeper conversation about the hyper-customisation of AI in the workplace.’

Looking ahead, Aral and Ju plan to explore how personalised AI can assist in negotiations, customer support, creative writing and coding tasks. Their findings suggest fitting AI to the user may become as critical as managing human team dynamics.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Heat action plans in India struggle to match rising urban temperatures

On 11 June, the India Meteorological Department (IMD) issued a red alert for Delhi as temperatures exceeded 45°C, with real-feel levels reaching 54°C.

Despite warnings, many outdoor workers in the informal sector continued working, highlighting challenges in protecting vulnerable populations during heatwaves.

The primary tool in India for managing extreme heat, the Heat Action Plan (HAP), is developed annually by city and state governments. While some regions, such as Ahmedabad and Tamil Nadu, have reported improved outcomes, most HAPs face implementation, funding, coordination, and data availability issues.

A 2023 study found that 95% of HAPs lacked detailed mapping of high-risk areas and vulnerable groups. Experts and non-governmental organisations recommend incorporating Geographic Information Systems (GIS) and remote sensing to improve targeting.

A study by the Ashoka Trust for Research in Ecology and the Environment (ATREE) in Bengaluru found up to 9°C variation in land-surface temperatures within a two-square-kilometre ward, driven by differences in building types and green cover.

Delhi’s 2025 HAP introduced ward-level land surface temperature maps to identify high-risk areas. However, experts note that many datasets are adapted from agricultural monitoring tools and may not offer the spatial resolution needed for urban planning.
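
For readers curious about how ward-level targeting works in practice, the sketch below shows one common approach: computing zonal statistics for each ward from a land-surface-temperature raster. It is an illustrative outline under assumed inputs, not the Delhi HAP’s actual pipeline; the file names wards.geojson and lst.tif are hypothetical, and the geopandas and rasterio libraries are assumed to be available.

```python
# Illustrative ward-level zonal statistics (hypothetical inputs):
# "wards.geojson" holds ward boundaries and "lst.tif" a land-surface-
# temperature raster in the same coordinate reference system.
import geopandas as gpd
import rasterio
from rasterio.mask import mask

wards = gpd.read_file("wards.geojson")

stats = []
with rasterio.open("lst.tif") as src:
    for _, ward in wards.iterrows():
        # Clip the raster to the ward polygon; filled=False keeps a masked
        # array so nodata and outside-polygon pixels can be dropped.
        clipped, _ = mask(src, [ward.geometry], crop=True, filled=False)
        values = clipped.compressed()
        if values.size:
            stats.append({
                "ward": ward.get("name", "unnamed"),
                "lst_mean": float(values.mean()),
                "lst_range": float(values.max() - values.min()),
            })

# Wards with the hottest mean surface temperature are candidates for
# targeted measures such as shelters, hydration points, or shifted work hours.
for row in sorted(stats, key=lambda r: r["lst_mean"], reverse=True)[:5]:
    print(row)
```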

To address this, organisations such as SEEDS and Chintan are using AI models like Sunny Lives to assess indoor heat exposure in low-income settlements. The models estimate indoor temperatures and wet-bulb heat stress using data on roof materials and construction types, offering building-level insights.
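
To give a sense of the quantity such models estimate, the snippet below computes wet-bulb temperature from air temperature and relative humidity using Stull’s 2011 approximation. It is a minimal sketch rather than the Sunny Lives model itself, and the example inputs are hypothetical.

```python
import math

def wet_bulb_stull(temp_c: float, rh_percent: float) -> float:
    """Approximate wet-bulb temperature (deg C) from air temperature and
    relative humidity using Stull's (2011) empirical fit, valid roughly
    for 5-99% RH and -20 to 50 deg C at sea-level pressure."""
    t, rh = temp_c, rh_percent
    return (
        t * math.atan(0.151977 * math.sqrt(rh + 8.313659))
        + math.atan(t + rh)
        - math.atan(rh - 1.676331)
        + 0.00391838 * rh ** 1.5 * math.atan(0.023101 * rh)
        - 4.686035
    )

# Hypothetical indoor reading under a metal roof on a 45 deg C day.
print(f"wet-bulb: {wet_bulb_stull(42.0, 40.0):.1f} deg C")
```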

Researchers argue that future HAPs should operate at the ward level and be supported by local heat vulnerability indexes, allowing for tailored interventions such as adjusted work hours, targeted hydration stations, and heat shelters.

Some announced measures—such as deploying water coolers and establishing day shelters—remain pending. Power outages in some areas also reduce the effectiveness of heat relief efforts.

Only eight Indian states officially classify heatwaves as disasters, limiting access to dedicated funding and emergency response mandates. Heatwaves are not recognised under national disaster legislation, which affects formal policy prioritisation.

Experts emphasise that building long-term heat resilience requires integrating HAPs with broader policy areas such as energy, water, public health, and employment. Several national programmes could support these efforts, but local implementation often suffers from limited awareness of available resources.

As climate risks grow, timely, data-driven, and locally tailored heat response strategies will be key to reducing health and economic impacts.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Generative AI and the continued importance of cybersecurity fundamentals

The introduction of generative AI (GenAI) is influencing developments in cybersecurity across industries.

AI-powered tools are being integrated into systems such as endpoint detection and response (EDR) platforms and security operations centres (SOCs), while threat actors are reportedly exploring ways to use GenAI to automate known attack methods.

While GenAI presents new capabilities, common cybersecurity vulnerabilities remain a primary concern. Issues such as outdated patching, misconfigured cloud environments, and limited incident response readiness are still linked to most breaches.

Cybersecurity researchers have noted that GenAI is often used to scale familiar techniques rather than create new attack methods.

Social engineering, privilege escalation, and reconnaissance remain core tactics, with GenAI accelerating their execution. There are also indications that some GenAI systems can be manipulated to reveal sensitive data, particularly when not properly secured or configured.

Security experts recommend maintaining strong foundational practices such as access control, patch management, and configuration audits. These measures remain critical, regardless of the integration of advanced AI tools.
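
As a minimal illustration of what a lightweight configuration audit can look like, the sketch below checks a list of hosts for exposed service ports that are frequently implicated in breaches. It is a toy example with placeholder addresses, not a substitute for dedicated scanners or the broader practices named above.

```python
import socket

# Placeholder (TEST-NET) addresses stand in for an organisation's asset list.
HOSTS = ["192.0.2.10", "192.0.2.11"]
RISKY_PORTS = {23: "telnet", 445: "smb", 3389: "rdp"}

def exposed_ports(host: str, timeout: float = 1.0) -> list[str]:
    """Return the risky services the host accepts connections on."""
    findings = []
    for port, name in RISKY_PORTS.items():
        try:
            with socket.create_connection((host, port), timeout=timeout):
                findings.append(f"{name} ({port}) open")
        except OSError:
            pass  # closed, filtered, or unreachable
    return findings

if __name__ == "__main__":
    for host in HOSTS:
        issues = exposed_ports(host)
        print(host, "->", ", ".join(issues) if issues else "no risky ports open")
```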

Some organisations may prioritise tool deployment over training, but research suggests that incident response skills are more effective when developed through practical exercises. Traditional awareness programmes may not sufficiently prepare personnel for real-time decision-making.

To address this, some companies implement cyber drills that simulate attacks under realistic conditions. These exercises can help teams practise protocols, identify weaknesses in workflows, and evaluate how systems perform under pressure. Such drills are designed to complement, not replace, other security measures.

Although GenAI is expected to continue shaping the threat landscape, current evidence suggests that most breaches stem from preventable issues. Ongoing training, configuration management, and response planning efforts remain central to organisational resilience.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Perplexity AI bot now makes videos on X

Perplexity’s AI chatbot, now integrated with X (formerly Twitter), has introduced a feature that allows users to generate short AI-created videos with sound.

By tagging @AskPerplexity with a brief prompt, users receive eight-second clips featuring computer-generated visuals and audio, including dialogue. The move is seen as a potential driver of engagement on the Elon Musk-owned platform.

However, concerns have emerged over the possibility of misinformation spreading more easily. Perplexity claims to have installed strong filters to limit abuse, but X’s poor content moderation continues to fuel scepticism.

The feature has already been used to create imaginative videos involving public figures, sparking debates around ethical use.

The competition between Perplexity’s ‘Ask’ bot and Musk’s Grok AI is intensifying, with the former taking the lead in multimedia capabilities. Despite its popularity on X, Grok does not currently support video generation.

Meanwhile, Perplexity is expanding to other platforms, including WhatsApp, offering AI services directly without requiring a separate app or registration.

Legal troubles have also surfaced. The BBC is threatening legal action against Perplexity over alleged unauthorised use of its content for AI training. In a strongly worded letter, the broadcaster has demanded content deletion, compensation, and a halt to further scraping.

Perplexity dismissed the claims as manipulative, accusing the BBC of misunderstanding technology and copyright law.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Elon Musk wants Grok AI to rewrite historical facts

Elon Musk has revealed plans to retrain his Grok AI model by rewriting human knowledge, claiming current training datasets contain too much ‘garbage’ and unchecked errors.

He stated that Grok 3.5 would be designed for ‘advanced reasoning’ and tasked with correcting historical inaccuracies before using the revised corpus to retrain itself.

Musk, who has criticised other AI systems like ChatGPT for being ‘politically correct’ and biased, wants Grok to be ‘anti-woke’ instead.

His stance echoes his earlier approach to X, where he relaxed content moderation and introduced a Community Notes feature in response to the platform being flooded with misinformation and conspiracy theories after his takeover.

The proposal has drawn fierce criticism from academics and AI experts. Gary Marcus called the plan ‘straight out of 1984’, accusing Musk of rewriting history to suit personal beliefs.

Logic professor Bernardino Sassoli de’ Bianchi warned the idea posed a dangerous precedent where ideology overrides truth, calling it ‘narrative control, not innovation’.

Musk also urged users on X to submit ‘politically incorrect but factually true’ content to help train Grok.

The move quickly attracted falsehoods and debunked conspiracies, including Holocaust distortion, anti-vaccine claims and pseudoscientific racism, raising alarms about the real risks of curating AI data based on subjective ideas of truth.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

LinkedIn users still hesitate to use AI writing tools

LinkedIn users have readily embraced AI in many areas, but one feature has not taken off as expected — AI-generated writing suggestions for posts.

CEO Ryan Roslansky admitted to Bloomberg that the tool’s popularity has fallen short, likely due to the platform’s professional nature and the risk of reputational damage.

Unlike casual platforms such as X or TikTok, LinkedIn posts often serve as an extension of users’ résumés. Roslansky explained that being called out for using AI-generated content on LinkedIn could damage someone’s career prospects, making users more cautious about automation.

Despite the hesitation around AI-assisted writing, LinkedIn has seen explosive growth in AI-related job demand and skills. The number of roles requiring AI knowledge has increased sixfold in the past year, while user profiles listing such skills have jumped twentyfold.

Roslansky also shared that he relies on AI when communicating with his boss, Microsoft CEO Satya Nadella. Before sending an email, he uses Copilot to ensure it reflects the polished, insightful tone he calls ‘Satya-smart.’

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI and Microsoft’s collaboration is near breaking point

The once-celebrated partnership between OpenAI and Microsoft is now under severe strain as disputes over control and strategic direction threaten to dismantle their alliance.

OpenAI’s move toward a for-profit model has placed it at odds with Microsoft, which has invested billions and provided exclusive access to Azure infrastructure.

Microsoft’s financial backing and technical involvement have granted it a powerful voice in OpenAI’s operations. However, OpenAI now appears determined to gain independence, even if it risks severing ties with the tech giant.

Negotiations are ongoing, but the growing rift could reshape the trajectory of generative AI development if the collaboration collapses.

Amid the tensions, Microsoft is evaluating alternative options, including developing its own AI tools and working with rivals such as Meta and xAI.

Such a pivot suggests Microsoft is preparing for a future beyond OpenAI, potentially ending its exclusive access to upcoming models and intellectual property.

A breakdown could have industry-wide repercussions. OpenAI may struggle to secure the estimated $40 billion in fresh funding it seeks, especially without Microsoft’s support.

At the same time, the rivalry could accelerate competition across the AI sector, prompting others to strengthen or redefine their positions in the race for dominance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!