South Korea’s SK Group and AWS team up on AI infrastructure

South Korean conglomerate SK Group has joined forces with Amazon Web Services (AWS) to invest 7 trillion won (approximately $5.1 billion) in building a large-scale AI data centre in Ulsan, South Korea. The project aims to bolster the country’s AI infrastructure over the next 15 years.

According to South Korea’s Ministry of Science and ICT, construction of the facility will begin in September 2025, and the site is expected to become fully operational by early 2029. Once complete, the Ulsan data centre will have a power capacity exceeding 100 megawatts. AWS will contribute $4 billion to the project.

SK Group stated on Sunday that the data centre will support Korea’s AI ambitions by integrating high-speed networks, advanced semiconductors, and efficient energy systems. In a LinkedIn post, SK Group chairman Chey Tae-won said the company is ‘uniquely positioned’ to drive AI innovation.

He highlighted the role of several SK affiliates in the project, including SK Hynix for high-bandwidth memory, SK Telecom and SK Broadband for network operations, and SK Gas and SK Multi Utility for infrastructure and energy.

The initiative is part of SK Group’s broader commitment to AI investment. In 2023, the company pledged to invest 82 trillion won by 2026 in HBM chip development, data centres, and AI-powered services.

The group has also backed AI startups such as Perplexity, Twelve Labs, and Korean LLM developer Upstage. Its chip unit, Sapeon, merged with rival Rebellions last year, creating a company valued at 1.3 trillion won.

Other major Korean players are also ramping up AI efforts. Tech giant Kakao recently announced plans to invest 600 billion won in an AI data centre and partnered with OpenAI to incorporate ChatGPT technology into its services.

The tech industry in South Korea continues to race towards AI dominance, with domestic firms making substantial investments to secure future leadership in AI infrastructure and applications.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Apple considers buying Perplexity AI

Apple is reportedly considering an acquisition of Perplexity AI as it attempts to catch up in the fast-moving race for dominance in generative AI.

According to Bloomberg, the discussions, which involve senior executives including Eddy Cue and mergers chief Adrian Perica, remain at an early stage.

Such a move would mark a significant shift for Apple, which typically avoids large-scale takeovers. However, with investor pressure mounting after an underwhelming developer conference, the tech giant may rethink its traditionally cautious acquisition strategy.

Perplexity has gained prominence for its fast, clear AI chatbot and recently secured funding at a $14 billion valuation.

Should Apple proceed, the acquisition would be the company’s largest ever, both financially and strategically, potentially transforming its position in AI and reducing its long-standing dependence on Google’s search infrastructure.

Apple’s slow development of Siri and reliance on a $20 billion revenue-sharing deal with Google have left it trailing rivals. With that partnership now under regulatory scrutiny in the US, Apple may view Perplexity as a vital step towards building a more autonomous search and AI ecosystem.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Africa reflects on 20 years of WSIS at IGF 2025

At the Internet Governance Forum (IGF) 2025, a high-level session brought together African government officials, private sector leaders, civil society advocates, and international experts to reflect on two decades of the continent’s engagement in the World Summit on the Information Society (WSIS) process. Moderated by Mactar Seck of the UN Economic Commission for Africa, the WSIS+20 Africa review highlighted both remarkable progress and ongoing challenges in digital transformation.

Seck opened the discussion with a snapshot of Africa’s connectivity leap from 2.6% in 2005 to 38% today. Yet, he warned, ‘Cybersecurity costs Africa 10% of its GDP,’ underscoring the urgency of coordinated investment and inclusion. Emphasising multi-stakeholder collaboration, he called for ‘inclusive policy-making across government, private sector, academia and civil society,’ aligned with frameworks such as the AU Digital Strategy and the Global Digital Compact.

Tanzania’s Permanent Secretary detailed the country’s 10-year National Digital Strategic Framework, boasting 92% 3G and 91% 4G coverage and regional infrastructure links. Meanwhile, Benin’s Hon. Adjara presented the Cotonou Declaration and proposed an African Digital Performance Index to monitor broadband, skills, cybersecurity, and inclusion. From the private sector, Jimson Odufuye called for ‘annual WSIS reviews at national level’ and closer alignment with the Sustainable Development Goals, stating, ‘If we cannot measure progress, we cannot reach the SDGs.’

Gender advocate Baratang Pil called for a revision of WSIS action lines to include mandatory gender audits and demanded that ‘30% of national AI and DPI funding go to women-led tech firms.’ Youth representative Louvo Gray stressed the need for $100 billion to close the continent’s digital divide, reminding participants that by 2050, 42% of the world’s youth will be African. Philippe Roux of the UN Emerging Technology Office urged policymakers to focus on implementation over renegotiation: ‘People are not connected because it costs too much — we must address the demand side.’

The panel concluded with a call for enhanced continental cooperation and practical action. As Seck summarised, ‘Africa has the youth, knowledge, and opportunity to lead in the Fourth Industrial Revolution. We must make sure digital inclusion is not a slogan — it must be a shared commitment.’

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

Rethinking AI in journalism with global cooperation

At the Internet Governance Forum 2025 in Lillestrøm, Norway, a vibrant multistakeholder session spotlighted the ethical dilemmas of AI in journalism and digital content. The event was hosted by R&W Media and introduced the Haarlem Declaration, a global initiative to promote responsible AI practices in digital media.

Central to the discussion was the unveiling of an ‘ethical AI checklist’ designed to help organisations uphold human rights, transparency, and environmental responsibility while navigating AI’s expanding role in content creation. Speakers emphasised a people-centred approach to AI, advocating for tools that support rather than replace human decision-making.

Ernst Noorman, the Dutch Ambassador for Cyber Affairs, called for AI policies rooted in international human rights law, highlighting the EU’s Digital Services Act and AI Act as potential models. Meanwhile, grassroots organisations from the Global South shared real-world challenges, including algorithmic bias, language exclusion, and environmental impacts.

Taysir Mathlouthi of Hamleh detailed efforts to build localised AI models in Arabic and Hebrew, while Nepal’s Yuva organisation, represented by Sanskriti Panday, explained how small NGOs balance ethical use of generative tools like ChatGPT with limited resources. The Global Forum for Media Development’s Laura Becana Ball introduced the Journalism Cloud Alliance, a collective aimed at making AI tools more accessible and affordable for newsrooms.

Despite enthusiasm, participants acknowledged hurdles such as checklist fatigue, lack of capacity, and the need for AI literacy training. Still, there was a shared sense of urgency and optimism, with the consensus that ethical frameworks must be embedded from the outset of AI development and not bolted on as an afterthought.

In closing, organisers invited civil society and media groups to endorse the Haarlem Declaration and co-create practical tools for ethical AI governance. While challenges remain, the forum set a clear agenda: ethical AI in media must be inclusive, accountable, and co-designed by those most affected by its implementation.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

Lawmakers at IGF 2025 call for global digital safeguards

At the Internet Governance Forum (IGF) 2025 in Norway, a high‑level parliamentary roundtable convened global lawmakers to tackle the pressing challenge of digital threats to democracy. Led by moderator Nikolis Smith, the discussion included Martin Chungong, Secretary‑General of the Inter‑Parliamentary Union (via video), and legislators from Norway, Kenya, California, Barbados, and Tajikistan. The central concern was how AI, disinformation, deepfakes, and digital inequality jeopardise truth, electoral integrity, and public trust.

Grunde Almeland, Member of the Norwegian Parliament, warned: ‘Truth is becoming less relevant … it’s harder and harder to pierce [confirmation‑bias] bubbles with factual debate and … facts.’ He championed strong, independent media, noting Norway’s success as ‘number one on the press freedom index’ due to its editorial independence and extensive public funding. Almeland emphasised that legislation exists, but practical implementation and international coordination are key.

Kenyan Senator Catherine Mumma described a comprehensive legal framework—including cybercrime, data protection, and media acts—but admitted gaps in tackling misinformation. ‘We don’t have a law that specifically addresses misinformation and disinformation,’ she said, adding that social‑media rumours ‘[sometimes escalate] to violence’, especially around elections. Mumma called for balanced regulation that safeguards innovation, human rights, and investment in digital infrastructure and inclusion.

California Assembly Member Rebecca Bauer‑Kahan outlined her state’s trailblazing privacy and AI regulations. She highlighted a new law mandating watermarking of AI‑generated content and requiring political‑advert disclosures, although these face legal challenges as potentially ‘forced speech’. Bauer‑Kahan stressed the need for ‘technology for good’, including funding universities to develop watermarking and authentication tools—like Adobe’s system for verifying official content—emphasising that visual transparency restores trust.

Barbados MP Marsha Caddle recounted a recent deepfake falsely attributed to her prime minister, warning that it threatened to ‘put at risk … global engagement’. She promoted democratic literacy and transparency, explaining that parliamentary meetings are broadcast live to encourage public trust. She also praised local tech platforms such as Zindi in Africa, saying they foster home‑grown solutions to combat disinformation.

Tajikistan MP Zafar Alizoda highlighted regional disparities in data protections, noting that while EU citizens benefit from GDPR, users in Central Asia remain vulnerable. He urged platforms to adopt uniform global privacy standards: ‘Global platforms … must improve their policies for all users, regardless of the country of the user.’

Several participants—including John K.J. Kiarie, MP from Kenya—raised the crucial issue of ‘technological dumping,’ whereby wealthy nations and tech giants export harmful practices to vulnerable regions. Kiarie warned: ‘My people will be condemned to digital plantations… just like … slave trade.’ The consensus called for global digital governance treaties akin to nuclear or climate accords, alongside enforceable codes of conduct for Big Tech.

Despite challenges—such as balancing child protection, privacy, and platform regulation—parliamentarians reaffirmed shared goals: strengthening independent media, implementing watermarking and authentication technologies, increasing public literacy, ensuring equitable data protections, and fostering global cooperation. As Grunde Almeland put it: ‘We need to find spaces where we work together internationally… to find this common ground, a common set of rules.’ Their unified message: safeguarding democracy in the digital age demands national resilience and collective global action.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

FC Barcelona documents leaked in ransomware breach

A recent cyberattack on French insurer SMABTP’s Spanish subsidiary, Asefa, has led to the leak of over 200GB of sensitive data, including documents related to FC Barcelona.

The ransomware group Qilin has claimed responsibility for the breach, highlighting the growing threat posed by such actors. With high-profile victims now in the spotlight, the reputational damage could be substantial for Asefa and its clients.

The incident comes amid growing concern among UK small and medium-sized enterprises (SMEs) about cyber threats. According to GlobalData’s UK SME Insurance Survey 2025, more than a quarter of SMEs have been influenced by media reports of cyberattacks when purchasing cyber insurance.

Meanwhile, nearly one in five cited a competitor’s victimisation as a motivating factor.

Over 300 organisations have fallen victim to Qilin in the past year alone, reflecting the broader rise of AI-enabled cybercrime.

AI allows cybercriminals to refine their methods, making attacks more effective and challenging to detect. As a result, companies are increasingly recognising the importance of robust cybersecurity measures.

With threats escalating, there is an urgent call for insurers to offer more tailored cyber coverage and proactive services. The breach involving FC Barcelona is a stark reminder that no organisation is immune and that better risk assessment and resilience planning are now business essentials.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Tailored AI agents improve work output—at a social cost

AI agents can significantly improve workplace productivity when tailored to individual personality types, according to new research from the Massachusetts Institute of Technology (MIT). However, the study also found that increased efficiency may come at the expense of human social interaction.

Led by Professor Sinan Aral and postdoctoral associate Harang Ju from MIT Sloan School of Management, the research revealed that human workers collaborating with AI agents completed tasks 60% more efficiently. This gain was partly attributed to a 23% reduction in social messages between team members.

The findings come amid a surge in the adoption of AI agents. A recent PwC survey found that 79% of senior executives had implemented AI agents in their organisations, with 66% reporting productivity gains. Agents are used in roles ranging from customer support to executive assistance and data analysis.

Aral and Ju developed a platform called Pairit (formerly MindMeld) to examine how AI affects team dynamics. In one of their experiments, over 2,000 participants were randomly assigned to human-only teams or teams mixed with AI agents. The groups were tasked with creating advertisements for a think tank.

Teams that included AI agents produced more content and higher-quality ad copy, but their human members communicated less, especially regarding emotional and rapport-building messages.

The study also highlighted the importance of matching AI traits to human personalities. For example, conscientious humans worked more effectively with AI agents exhibiting high openness, whereas extroverted humans underperformed when paired with highly conscientious AI counterparts.

‘AI traits can complement human personalities to enhance collaboration,’ the researchers noted. However, they stressed that the same AI assistant may not suit everyone.

This insight underpins the launch of their new venture, Pairium AI, which aims to develop agentic AI that adapts to individual work styles. The company describes its mission as ‘personalising the Agentic Age.’

Ju emphasised the importance of compatibility: ‘You don’t work the same way with all colleagues—AI should adapt in the same way.’

Devanshu Mehrotra, an analyst at Gartner, described the research as groundbreaking. ‘This opens the door to a much deeper conversation about the hyper-customisation of AI in the workplace.’

Looking ahead, Aral and Ju plan to explore how personalised AI can assist in negotiations, customer support, creative writing and coding tasks. Their findings suggest fitting AI to the user may become as critical as managing human team dynamics.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Heat action plans in India struggle to match rising urban temperatures

On 11 June, the India Meteorological Department (IMD) issued a red alert for Delhi as temperatures exceeded 45°C, with real-feel levels reaching 54°C.

Despite warnings, many outdoor workers in the informal sector continued working, highlighting challenges in protecting vulnerable populations during heatwaves.

The primary tool in India for managing extreme heat, the Heat Action Plan (HAP), is developed annually by city and state governments. While some regions, such as Ahmedabad and Tamil Nadu, have reported improved outcomes, most HAPs face implementation, funding, coordination, and data availability issues.

A 2023 study found that 95% of HAPs lacked detailed mapping of high-risk areas and vulnerable groups. Experts and non-governmental organisations recommend incorporating Geographic Information Systems (GIS) and remote sensing to improve targeting.

A study by the Ashoka Trust for Research in Ecology and the Environment (ATREE) in Bengaluru found up to 9°C variation in land-surface temperatures within a two-square-kilometre ward, driven by differences in building types and green cover.

Delhi’s 2025 HAP introduced ward-level land surface temperature maps to identify high-risk areas. However, experts note that many datasets are adapted from agricultural monitoring tools and may not offer the spatial resolution needed for urban planning.
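To make the ward-level mapping concrete, here is a minimal sketch that ranks wards by mean land-surface temperature using the open-source rasterstats library. The file names ("wards.shp", "lst.tif") and the "ward_name" attribute are hypothetical placeholders; this illustrates the general GIS workflow, not the specific tooling behind Delhi's HAP.

```python
# Minimal sketch: rank wards by mean land-surface temperature (LST).
# "wards.shp" (ward polygons) and "lst.tif" (LST raster in deg C) are
# hypothetical file names; "ward_name" is an assumed attribute.
from rasterstats import zonal_stats

features = zonal_stats(
    "wards.shp",
    "lst.tif",
    stats=["mean", "max"],
    geojson_out=True,  # keep each ward's attributes alongside its stats
)

# Wards with no raster coverage yield None; sort them last.
hottest = sorted(
    features,
    key=lambda f: f["properties"]["mean"] or float("-inf"),
    reverse=True,
)
for feat in hottest[:5]:
    props = feat["properties"]
    print(props.get("ward_name", "unknown"), "mean LST:",
          round(props["mean"], 1), "°C")
```

As the ATREE finding above suggests, even a simple ranking like this can surface several degrees of variation between neighbouring wards.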

To address this, organisations such as SEEDS and Chintan are using AI models like Sunny Lives to assess indoor heat exposure in low-income settlements. The models estimate indoor temperatures and wet-bulb heat stress using data on roof materials and construction types, offering building-level insights.
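For context on the wet-bulb metric such models report, the snippet below implements Stull's (2011) empirical approximation of wet-bulb temperature from air temperature and relative humidity. It is a standard illustrative formula, not the Sunny Lives model itself, and holds roughly for relative humidities of 5–99% and air temperatures of -20°C to 50°C.

```python
import math

def wet_bulb_stull(temp_c: float, rh_percent: float) -> float:
    """Stull (2011) wet-bulb temperature approximation in deg C.

    Illustrative only: not the Sunny Lives model. Valid roughly for
    5-99% relative humidity and air temperatures of -20 to 50 deg C.
    """
    t, rh = temp_c, rh_percent
    return (
        t * math.atan(0.151977 * math.sqrt(rh + 8.313659))
        + math.atan(t + rh)
        - math.atan(rh - 1.676331)
        + 0.00391838 * rh ** 1.5 * math.atan(0.023101 * rh)
        - 4.686035
    )

# Delhi's red-alert conditions: 45 deg C air at an assumed 30% humidity.
print(round(wet_bulb_stull(45.0, 30.0), 1))  # ~29.7 deg C wet-bulb
```

Wet-bulb values approaching 35°C are generally considered near the limit of human tolerance, which is why models that account for humidity and roof materials matter in dense, low-income settlements.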

Researchers argue that future HAPs should operate at the ward level and be supported by local heat vulnerability indexes, allowing for tailored interventions such as adjusted work hours, targeted hydration stations, and heat shelters.

Some announced measures—such as deploying water coolers and establishing day shelters—remain pending. Power outages in some areas also reduce the effectiveness of heat relief efforts.

Only eight Indian states officially classify heatwaves as disasters, limiting access to dedicated funding and emergency response mandates. Heatwaves are not recognised under national disaster legislation, which affects formal policy prioritisation.

Experts emphasise that building long-term heat resilience requires integrating HAPs with broader policy areas such as energy, water, public health, and employment. Several national programmes could support these efforts, but local implementation often suffers from limited awareness of available resources.

As climate risks grow, timely, data-driven, and locally tailored heat response strategies will be key to reducing health and economic impacts.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Generative AI and the continued importance of cybersecurity fundamentals

The introduction of generative AI (GenAI) is influencing developments in cybersecurity across industries.

AI-powered tools are being integrated into systems such as endpoint detection and response (EDR) platforms and security operations centres (SOCs), while threat actors are reportedly exploring ways to use GenAI to automate known attack methods.

While GenAI presents new capabilities, common cybersecurity vulnerabilities remain a primary concern. Issues such as outdated patching, misconfigured cloud environments, and limited incident response readiness are still linked to most breaches.

Cybersecurity researchers have noted that GenAI is often used to scale familiar techniques rather than create new attack methods.

Social engineering, privilege escalation, and reconnaissance remain core tactics, with GenAI accelerating their execution. There are also indications that some GenAI systems can be manipulated to reveal sensitive data, particularly when not properly secured or configured.

Security experts recommend maintaining strong foundational practices such as access control, patch management, and configuration audits. These measures remain critical, regardless of the integration of advanced AI tools.
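As one hedged illustration of what a slice of such a configuration audit might look like, the sketch below uses boto3 to flag Amazon S3 buckets whose access control lists grant access to all users, one of the cloud misconfigurations mentioned above. It assumes AWS credentials are already configured; a real audit would also cover bucket policies, encryption settings, and far more.

```python
# Minimal configuration-audit sketch: flag S3 buckets whose ACLs grant
# access to all users. Assumes boto3 is installed and AWS credentials
# are configured; a starting point, not a complete audit.
import boto3

ALL_USERS_URI = "http://acs.amazonaws.com/groups/global/AllUsers"

s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    acl = s3.get_bucket_acl(Bucket=name)
    for grant in acl["Grants"]:
        if grant["Grantee"].get("URI") == ALL_USERS_URI:
            print(f"WARNING: bucket '{name}' grants "
                  f"{grant['Permission']} to all users")
```

Checks like this are cheap to automate and run on a schedule, which is one reason configuration audits remain a staple of the foundational practices described above.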

Some organisations may prioritise tool deployment over training, but research suggests that incident response skills are more effective when developed through practical exercises. Traditional awareness programmes may not sufficiently prepare personnel for real-time decision-making.

To address this, some companies implement cyber drills that simulate attacks under realistic conditions. These exercises can help teams practise protocols, identify weaknesses in workflows, and evaluate how systems perform under pressure. Such drills are designed to complement, not replace, other security measures.

Although GenAI is expected to continue shaping the threat landscape, current evidence suggests that most breaches stem from preventable issues. Ongoing training, configuration management, and response planning efforts remain central to organisational resilience.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Perplexity AI bot now makes videos on X

Perplexity’s AI chatbot, now integrated with X (formerly Twitter), has introduced a feature that allows users to generate short AI-created videos with sound.

By tagging @AskPerplexity with a brief prompt, users receive eight-second clips featuring computer-generated visuals and audio, including dialogue. The move is seen as a potential driver of engagement on the Elon Musk-owned platform.

However, concerns have emerged over the possibility of misinformation spreading more easily. Perplexity claims to have installed strong filters to limit abuse, but X’s poor content moderation continues to fuel scepticism.

The feature has already been used to create imaginative videos involving public figures, sparking debates around ethical use.

The competition between Perplexity’s ‘Ask’ bot and Musk’s Grok AI is intensifying, with the former taking the lead in multimedia capabilities. Despite its popularity on X, Grok does not currently support video generation.

Meanwhile, Perplexity is expanding to other platforms, including WhatsApp, offering AI services directly without requiring a separate app or registration.

Legal troubles have also surfaced. The BBC is threatening legal action against Perplexity over alleged unauthorised use of its content for AI training. In a strongly worded letter, the broadcaster has demanded content deletion, compensation, and a halt to further scraping.

Perplexity dismissed the claims as manipulative, accusing the BBC of misunderstanding technology and copyright law.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!