Asia emerges as global hub for telco‑powered AI infrastructure

Asia‑Pacific telecom operators are rapidly building sovereign AI factories and high‑performance data centres optimised for AI workloads, retrofitting existing facilities with NVIDIA GPUs and leveraging their fibre networks and systems‑management expertise.

Major Southeast‑Asian telcos, including Singtel (RE: AI), Indonesia’s Indosat Ooredoo Hutchison, Vietnam’s FPT, Malaysia’s YTL, and India’s Tata Communications, are pioneering cloud‑based AI platforms tailored to local enterprise needs. These investments often mirror national AI strategies focused on data sovereignty and regional self‑sufficiency.

Operators are pursuing a hybrid strategy, partnering with hyperscalers like AWS and Azure for scale while building local infrastructure to avoid vendor lock‑in, cost volatility, and compliance risks. Examples include SoftBank and KDDI in Japan, KT in South Korea, Viettel in Vietnam, and Kazakhtelecom in Central Asia.

This telco‑led, on‑premises AI infrastructure boom marks a significant shift in global AI deployment, transforming operators from mere connectivity providers into essential sovereign AI enablers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI interviews leave job candidates in the dark

An increasing number of startups are now using AI to conduct video job interviews, often without making this clear to applicants. Senior software developers are finding themselves unknowingly engaging with automated systems instead of human recruiters.

Applicants are typically asked to submit videos responding to broad interview prompts, including examples and case studies, often without time constraints or human engagement.

AI then processes these asynchronous interviews, evaluating responses with natural language processing, facial cues, and tone to assign scores.

Critics argue that this approach shifts the burden of labour onto job seekers, while employers remain unaware of the hidden costs and flawed metrics. There is also concern about the erosion of dignity in hiring, with candidates treated as data points rather than individuals.

Although AI offers potential efficiencies, the current implementation risks deepening dysfunctions in recruitment by prioritising speed over fairness, transparency and candidate experience. Until the technology is used more thoughtfully, experts advise job seekers to avoid such processes altogether.


AI and big data to streamline South Korea’s drug evaluation processes

The Ministry of Food and Drug Safety (MFDS) of South Korea is modernising its drug review and evaluation processes by incorporating AI, big data, and other emerging technologies.

The efforts are being spearheaded by the ministry’s National Institute for Food and Drug Safety Evaluation (NIFDS).

Starting next year, NIFDS plans to apply AI to assist with routine tasks such as preparing review data.

The initial focus will be synthetic chemical drugs, gradually expanding to other product categories.

‘Initial AI applications will focus on streamlining repetitive tasks,’ said Jeong Ji-won, head of the Pharmaceutical and Medical Device Research Department at NIFDS.

‘The AI system is being developed internally, and we are evaluating its potential for real-world inspection scenarios. A phased approach is necessary due to the large volume of data required,’ Jeong added.

In parallel, NIFDS is exploring the use of big data across various regulatory activities.

One initiative involves applying big data analytics to enhance risk assessments during overseas GMP inspections. ‘Standardisation remains a challenge due to varying formats across facilities,’ said Sohn Kyung-hoon, head of the Drug Research Division.

‘Nonetheless, we’re working to develop a system that enhances the efficiency of inspections without relying on foreign collaborations.’ Efforts also include building domain-specific Korean-English translation models for safety documentation.

The institute is also integrating AI into pharmaceutical manufacturing oversight and developing frameworks for public data utilisation. These efforts include systems for analysing adverse drug reaction reports and standardising data inputs.

NIFDS is actively researching new analysis methods and safety protocols for impurity control.

‘We’re prioritising research on impurities such as NDMA,’ Sohn noted. Simultaneous detection methods are being tailored for smaller manufacturers.

New categorisation techniques are also being developed to monitor previously untracked substances.

On the biologics front, NIFDS aims to finalise its mRNA vaccine evaluation technology by year-end.

The five-year project supports the national strategy for improving infectious disease preparedness in South Korea, including work on delivery mechanisms and material composition.

‘This initiative is part of our broader strategy to improve preparedness for future infectious disease outbreaks,’ said Lee Chul-hyun, head of the Biologics Research Division.

Evaluation protocols for antibody drugs are still in progress. However, indirect support is being provided through guidelines and benchmarking against international cases. Separately, the Herbal Medicine Research Division is upgrading its standardised product distribution model.

The current use-based system will shift to a field-based one next year, extending to the pharmaceutical, functional food, and cosmetics sectors.

‘We’re refining the system to improve access and quality control,’ said Hwang Jin-hee, head of the division. Collaboration with regional research institutions remains a key component of this work.

NIFDS currently offers 396 standardised herbal medicines. The institute continues to develop new reference materials annually as part of its evolving strategy.


AI fluency is the new office software skill

As tools like ChatGPT, Copilot, and other generative AI systems become embedded in daily workflows, employers increasingly prioritise a new skill: AI fluency.

Much like proficiency in office software became essential in the past, knowing how to collaborate effectively with AI is now a growing requirement across industries.

But interacting with AI isn’t always intuitive. Many users encounter generic or unhelpful responses from chatbots and assume the technology is limited. In reality, AI systems rely heavily on the context they are given, and that’s where users come in.

Rather than treating AI as a search engine, it helps to see it as a partner that needs guidance. A vague prompt like ‘write a proposal’ is unlikely to produce meaningful results. A better approach provides background, direction, and clear expectations.

One practical framework is CATS: context, angle, task, and style.

Context sets the stage. It includes your role, the situation, the audience, and constraints. For example, ‘I’m a nonprofit director writing a grant proposal for an environmental education program in urban schools’ offers much more to work with than a general request.

Angle defines the perspective. You can ask the AI to act as a peer reviewer, a mentor, or even a sceptical audience member. These roles help shape the tone and focus of the response.

Task clarifies the action you want. Instead of asking for help with a presentation, try ‘Suggest three ways to improve my opening slide for an audience of small business owners.’

Style determines the format and tone. Whether you need a formal report, a friendly email, or an outline in bullet points, specifying the style helps the AI deliver a more relevant output.
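Put together, the four CATS components can be sketched as a small helper that assembles a structured prompt. This is a minimal illustration; the function name and field labels are invented for the example, not part of any chatbot's API:

```python
def build_prompt(context: str, angle: str, task: str, style: str) -> str:
    """Assemble a CATS-structured prompt: Context, Angle, Task, Style."""
    return (
        f"Context: {context}\n"
        f"Act as: {angle}\n"
        f"Task: {task}\n"
        f"Style: {style}"
    )

prompt = build_prompt(
    context=("I'm a nonprofit director writing a grant proposal for an "
             "environmental education program in urban schools."),
    angle="a sceptical grant reviewer",
    task="Suggest three ways to strengthen my project summary.",
    style="A numbered list in plain, formal English.",
)
print(prompt)
```

The same four-field structure works just as well typed directly into a chat window; the point is simply that each element is filled in before the request is sent.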

Beyond prompts, users can also practise context engineering—managing the environment around the prompt. This includes uploading relevant documents, building on previous chats, or setting parameters through instructions. These steps help tailor responses more closely to your needs.

Think of prompting as a conversation, not a one-shot command. If the initial response isn’t ideal, clarify, refine, or build on it. Ask follow-up questions, adjust your instructions, or extract functional elements to develop further in a new thread.

That said, it’s essential to stay critical. AI systems can mimic natural conversation, but don’t truly understand the information they provide. Human oversight remains crucial. Always verify outputs, especially in professional or high-stakes contexts.

Ultimately, AI tools are powerful collaborators—but only when paired with clear guidance and human judgment. Provide the correct input, and you’ll often find the output exceeds expectations.


UN reports surge in intangible investment driven by AI and data

Global investment is increasingly flowing into intangible assets such as software, data, and AI, marking what the UN has described as a ‘fundamental shift’ in how economies develop and compete.

According to a new report from the World Intellectual Property Organisation (WIPO), co-authored with Italy's Luiss Business School, investment in intellectual property-related assets grew three times faster in 2024 than spending on physical assets like buildings and machinery.

WIPO reported that total intangible investment reached $7.6 trillion across 27 high- and middle-income economies last year, up from $7.4 trillion in 2023—a real-term growth rate of 3 percent. In contrast, growth in physical asset investment has been more sluggish, hindered by high interest rates and a slow economic recovery.
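As a quick sanity check, the year-on-year growth implied by the two totals can be computed directly; the small gap to the reported 3 percent real-term figure is consistent with the rounding of the trillion-dollar totals:

```python
# Growth implied by the WIPO investment totals quoted above.
investment_2023 = 7.4  # trillion USD
investment_2024 = 7.6  # trillion USD

implied_growth = (investment_2024 / investment_2023 - 1) * 100
print(f"Implied growth: {implied_growth:.1f}%")  # ~2.7%, in line with the
# reported 3 percent once rounding of the totals is taken into account
```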

‘We’re witnessing a fundamental shift in how economies grow and compete,’ said WIPO Director General Daren Tang. ‘While businesses have slowed down investing in factories and equipment during uncertain times, they’re doubling down on intangible assets.’

The report highlights software and databases as the fastest-growing categories, expanding by more than 7 percent annually between 2013 and 2022. It attributes much of this trend to the accelerating adoption of AI, which requires significant investment in data infrastructure and training datasets.

WIPO also noted that the United States remains the global leader in absolute intangible investment, spending nearly twice as much as France, Germany, Japan, and the United Kingdom. Sweden, however, topped the list for investment intensity, with intangible assets representing 16 percent of its GDP.

The US, France, and Finland followed at 15 percent each, while India ranked ahead of several EU countries and Japan at an intensity of nearly 10 percent.

Despite economic disruptions over the past decade and a half, intangible investments have remained resilient, growing at a compound annual rate of 4 percent since 2008. By contrast, investment in tangible assets rose just 1 percent over the same period.

‘We are only at the beginning of the AI boom,’ said Sacha Wunsch-Vincent, head of WIPO’s economics and data analytics department.

He noted that in addition to driving demand for physical infrastructure like chips and servers, AI is now contributing to sustained investment growth in data and software, cornerstones of the intangible economy.


LG’s Exaone Path 2.0 uses AI to transform genetic testing

LG AI Research has introduced Exaone Path 2.0, an upgraded AI model designed to analyse pathology images for disease diagnosis, significantly reducing the time required for genetic testing.

The new model, unveiled Wednesday, can reportedly process pathology images in under a minute—a significant shift from conventional genetic testing methods that often take more than two weeks.

According to LG, the AI system offers enhanced accuracy in detecting genetic mutations and gene expression patterns by learning from detailed image patches and full-slide pathology data.

Developed by LG AI Research, a division of the LG Group, Exaone Path 2.0 is trained on over 10,000 whole-slide images (WSIs) and multiomics pairs, enabling it to integrate structural information with molecular biology insights. The company said it has achieved a 78.4 percent accuracy rate in predicting genetic mutations.

The model has also been tailored for specific applications in oncology, including lung and colorectal cancers, where it can help clinicians identify patient groups most likely to benefit from targeted therapies.

LG AI Research is collaborating with Professor Hwang Tae-hyun and his team at Vanderbilt University Medical Centre in the US to further its application in real-world clinical settings.

Their shared goal is to develop a multimodal medical AI platform that can support precision medicine directly within clinical environments.

Hwang, a key contributor to the US government’s Cancer Moonshot program and founder of the Molecular AI Initiative at Vanderbilt, emphasised that the aim is to create AI tools usable by clinicians in active medical practice, rather than limiting innovation to the lab.

In addition to oncology, LG AI Research plans to extend its multimodal AI initiatives into transplant rejection, immunology, and diabetes.

It is also collaborating with the Jackson Laboratory to support Alzheimer’s research and working with Professor Baek Min-kyung’s team at Seoul National University on next-generation protein structure prediction.


WSIS+20 spotlights urgent need for global digital skills

The WSIS+20 High-Level Event in Geneva brought together global leaders to address the digital skills gap as one of the most urgent challenges of our time. As moderator Jacek Oko stated, digital technologies are rapidly reshaping work and learning worldwide, and equipping people with the necessary skills has become a matter of equity and economic resilience.

Dr Cosmas Zavazava of ITU emphasised that the real threat is not AI itself but people being displaced by others who know how to use it. ‘Workers risk losing their jobs, not because of AI, but because someone else knows how to use AI-based tools,’ he warned.

He underscored the importance of including informal workers like artisans and farmers in reskilling initiatives. He noted that 2.6 billion people remain offline while many of the 5.8 billion connected lack meaningful digital capabilities.

Costa Rica’s Vice Minister of Telecommunications, Hubert Vargas Picado, shared how the country transformed into a regional tech hub by combining widespread internet access with workforce development. ‘Connectivity alone is insufficient,’ he said, advocating cross-sectoral training systems and targeted scholarships, especially for rural youth and women.


Similarly, Celeste Drake from the ILO pointed to gendered impacts of automation, revealing that administrative roles held mainly by women are most vulnerable. She insisted that upskilling must go hand-in-hand with policies promoting decent work, inclusive social dialogue, and regional equity.

The EU’s Michele Cervone d’Urso acknowledged the bloc’s shortfall in digital specialists and described Europe’s multipronged response, including digital academies and international talent partnerships.

Georgia’s Ekaterine Imedadze shared the success of embedding media literacy in public education and training local ambassadors to support digital inclusion in villages. Meanwhile, Anna Sophie Herken of GIZ warned of ‘massive talent waste’ in the Global South, where highly educated data workers are confined to low-value roles. Herken called for more equitable participation in the global digital economy and local AI innovation.

Private sector voices echoed the need for systemic change. EY’s Gillian Hinde stressed community co-creation and inclusive learning models, noting that only 22% of women pursue AI-related courses.

She outlined EY’s efforts to support neurodiverse learners and validate informal learning through digital badges. India’s Professor Himanshu Rai added a powerful sense of urgency, declaring, ‘AI is not the future. It’s already passing us by.’ He showcased India’s success in scaling low-cost digital access, training 60 million rural citizens, and adapting platforms to local languages and user needs.

His call for ‘compassionate’ policymaking underscored the moral imperative to act inclusively and decisively.

Speakers across sectors agreed that infrastructure without skills development risks widening the digital divide. Targeted interventions, continuous monitoring, and structural reform were repeatedly highlighted as essential.

The event’s parting thought, offered by Jacek Oko, summed up the transformative mindset required: ‘Let AI teach us about AI.’ The road ahead demands urgency, innovation, and collective action to ensure digital transformation uplifts all, especially the most vulnerable.

Track all key events from the WSIS+20 High-Level Event 2025 on our dedicated page.

Over 2.3 million users hit by Chrome and Edge extension malware

A stealthy browser hijacking campaign has infected over 2.3 million users through Chrome and Edge extensions that appeared safe and even displayed Google’s verified badge.

According to cybersecurity researchers at Koi Security, the campaign, dubbed RedDirection, involves 18 malicious extensions offering legitimate features like emoji keyboards and VPN tools, while secretly tracking users and backdooring their browsers.

One of the most popular extensions — a colour picker developed by ‘Geco’ — continues to be available on the Chrome and Edge stores with thousands of positive reviews.

While it works as intended, the extension also hijacks sessions, records browsing activity, and sends data to a remote server controlled by attackers.

What makes the campaign more insidious is how the malware was delivered. The extensions began as clean, useful tools, but malicious code was quietly added in later updates.

Because Google and Microsoft push extension updates automatically, most users received the spyware without taking any action or clicking anything.

Koi Security’s Idan Dardikman describes the campaign as one of the largest documented. Users are advised to uninstall any affected extensions, clear browser data, and monitor accounts for unusual activity.

Despite the serious breach, Google and Microsoft have not responded publicly.


Grok AI chatbot suspended in Turkey following court order

A Turkish court has issued a nationwide ban on Grok, the AI chatbot developed by Elon Musk’s company xAI, following recent developments involving the platform.

The ruling, delivered on Wednesday by a criminal court in Ankara, instructed Turkey’s telecommunications authority to block access to the chatbot across the country. The decision came after public filings under Turkey’s internet law prompted a judicial review.

Grok, which is integrated into the X platform (formerly Twitter), recently rolled out an update to make the system more open and responsive. The update has sparked broader global discussions about the challenges of moderating AI-generated content in diverse regulatory environments.

In a brief statement, X acknowledged the situation and confirmed that appropriate content moderation measures had been implemented in response. The ban places Turkey among many countries examining the role of generative AI tools and the standards that govern their deployment.


AI-powered imposter poses as US Secretary of State Rubio

An imposter posing as US Secretary of State Marco Rubio used an AI-generated voice and text messages to contact high-ranking officials, including foreign ministers, a senator, and a state governor.

The messages, sent through SMS and the encrypted app Signal, triggered an internal warning across the US State Department, according to a classified cable dated 3 July.

The individual created a fake Signal account using the name ‘Marco.Rubio@state.gov’ and began contacting targets in mid-June.

At least two received AI-generated voicemails, while others were encouraged to continue the chat via Signal. US officials said the aim was likely to gain access to sensitive information or compromise official accounts.

The State Department confirmed it is investigating the breach and has urged all embassies and consulates to remain alert. While no direct cyber threat was found, the department warned that shared information could still be exposed if targets were deceived.

A spokesperson declined to provide further details for security reasons.

The incident appears linked to a broader wave of AI-driven disinformation. A second operation, possibly tied to Russian actors, reportedly targeted Gmail accounts of journalists and former officials.

The FBI has warned of rising cases of ‘smishing’ and ‘vishing’ involving AI-generated content.

Experts now warn that deepfakes are becoming harder to detect, as the technology advances faster than defences.
