AI and robotics could offset impact of ageing populations in Asia

Declining fertility rates have long been considered a major risk to economic growth, but analysts suggest the outlook may not be entirely negative for several advanced Asian economies. Rising investment in AI and robotics is increasingly viewed as a way to offset labour shortages caused by ageing populations.

According to analysts at Bank of America Global Research, technological innovation driven by AI and robotics could support productivity growth even as workforces shrink. Strong ecosystems in semiconductors, technology hardware, and industrial machinery allow some countries in the region to deploy advanced technologies faster and at lower cost than many other parts of the world.

South Korea currently has the highest robot density in the world, with about 1,012 industrial robots per 10,000 manufacturing workers. China has 470 and Japan 419, both significantly above the global average of 162, according to 2024 figures from the International Federation of Robotics.

Analysts say governments across East Asia are accelerating the adoption of AI and robotics to address demographic pressures. In particular, China, South Korea, and Japan have expanded investments in robotics, AI systems, and advanced manufacturing technologies to maintain economic productivity.

Population projections highlight the scale of the challenge facing these economies. By 2050, about 37 percent of Japan’s population and nearly 40 percent of South Korea’s population are expected to be aged 65 or older, while China’s share could reach around 31 percent.

Despite concerns about slowing growth, economists argue that advances in AI and robotics could weaken the traditional link between economic output and workforce size. Automation technologies not only replace routine tasks but also enhance human productivity in many industries.

A study by the Bank of Korea estimated that demographic pressures could reduce the country’s gross domestic product by 16.5 percent between 2023 and 2050. However, wider adoption of AI and robotics could limit the decline to around 5.9 percent under favourable conditions.

Some analysts caution that the economic benefits of automation may not be evenly distributed. While AI and robotics can improve productivity, technological gains often benefit capital owners and highly skilled workers more than others.

Economists also warn that consumption may slow as the number of households declines, while governments may face greater fiscal pressure from higher pension and healthcare costs. Policymakers may need to invest in workforce retraining and education to help workers adapt to the growing role of AI and robotics in the economy.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Study warns AI chatbots may reinforce delusional thinking

A new scientific review has raised concerns that AI chatbots could reinforce delusional thinking, particularly among people already vulnerable to psychosis. The review, published in The Lancet Psychiatry, summarises emerging evidence suggesting that chatbot interactions may validate or amplify delusional thinking in certain users.

The study examined reports and research discussing what some have described as ‘AI-associated delusions’. Dr Hamilton Morrin, a psychiatrist and researcher at King’s College London, analysed media reports and existing evidence exploring how chatbot responses might interact with psychotic symptoms.

Psychotic delusions generally fall into three categories: grandiose, romantic, and paranoid. Researchers say chatbots may unintentionally reinforce such beliefs because they often respond in ways that are supportive or affirming. In some reported cases, users received responses suggesting spiritual significance or implying that a higher entity was communicating through the chatbot.

Researchers emphasise that there is currently no clear evidence that AI systems can independently cause psychosis in individuals without prior vulnerability. However, interactions with chatbots could strengthen existing beliefs or accelerate the progression of delusional thinking in people already at risk.

Experts say the interactive nature of chatbots may intensify the effect. Unlike static sources of information such as videos or articles, chatbots can engage users directly and repeatedly, potentially reinforcing problematic beliefs more quickly.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Google Earth AI supports disease forecasting and public health planning

Researchers are increasingly combining geospatial data with predictive modelling to anticipate health risks.

In that context, Google has introduced new capabilities within Google Earth AI designed to help public health experts forecast outbreaks and identify vulnerable communities.

The system integrates environmental information such as weather patterns, flooding and air quality with population mobility data and health records.

These insights allow researchers to analyse how environmental conditions influence the spread of diseases, including dengue fever and cholera.

Several research initiatives are already testing the models. In collaboration with the World Health Organisation Regional Office for Africa, forecasting tools combining Google’s time-series models with geospatial data improved cholera prediction accuracy by more than 35 percent.

Academic researchers are also applying the technology to other diseases. Scientists at the University of Oxford have used Earth AI datasets to improve six-month dengue forecasts in Brazil, helping local authorities prepare preventative responses.

The technology is also being tested for chronic disease analysis. In Australia, partnerships with health organisations are exploring how geospatial models can identify regional health needs and support preventative care strategies.

Combining environmental intelligence with health data could enable public health systems to shift from reactive crisis management to earlier detection and prevention of disease outbreaks.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

China prioritises AI and tech self-reliance in new five-year plan

A new five-year development plan approved by lawmakers in Beijing places innovation and advanced technology at the centre of future economic growth. The strategy is designed to strengthen technological capabilities and position China as a leading global tech power.

The plan outlines ambitions to upgrade China’s industrial sector, expand domestic research capacity, and reduce reliance on foreign technologies. Priority sectors include AI, robotics, aerospace, biotechnology, and quantum computing. Officials see these industries as key drivers of economic growth over the coming decades.

AI features prominently in the strategy, with the term appearing dozens of times in the policy document. Beijing plans to expand AI-related industries, invest in large computing clusters, and support the development of advanced systems capable of performing complex tasks beyond traditional chatbots.

China also aims to increase spending on science and technology, with government research budgets rising by around 10 percent annually. The plan sets a target of expanding research and development investment by at least 7 percent per year, reflecting Beijing’s intention to strengthen domestic innovation capacity.

Efforts to achieve greater technological self-sufficiency come amid continued tensions with the United States over trade and technology restrictions. Export controls on advanced semiconductor technologies have highlighted China’s dependence on foreign chips, prompting the government to pursue breakthroughs across the semiconductor supply chain and emerging technologies.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU reviews X compliance proposal under Digital Services Act

X has submitted a compliance proposal to the European Commission outlining how it intends to modify its blue check verification system following regulatory concerns under the Digital Services Act.

EU regulators concluded that the platform’s system allowed users to obtain verification simply by paying for a subscription without meaningful identity checks, potentially misleading users about the authenticity of accounts.

The Commission imposed a €120 million fine in December and gave the company 60 working days to propose corrective measures. Officials confirmed that X met the deadline for submitting a plan, which regulators will now assess.

The platform, owned by Elon Musk, must also pay the penalty while the Commission evaluates the proposed changes. The company has challenged the enforcement decision before the EU’s General Court.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

France pushes EU AI gigafactories to support European technology

France is calling for the EU’s planned AI ‘gigafactories’ to focus on testing and scaling European technologies rather than primarily increasing demand for hardware from companies such as Nvidia.

The large computing facilities are intended to provide the infrastructure needed to train advanced AI systems. However, officials in France argue that the projects should strengthen Europe’s technological capabilities rather than reinforce reliance on foreign suppliers.

Several EU countries, including Poland, Austria and Lithuania, support using the infrastructure to improve Europe’s digital resilience.

The initiative forms part of the European Commission’s wider plans to expand computing capacity and support the development of a stronger European AI ecosystem.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI says ChatGPT advertisements remain limited to the US

Despite speculation that the feature was expanding internationally, OpenAI has clarified that advertisements in ChatGPT are currently available only to users in the US.

Questions about a broader rollout emerged after references to advertisements appeared in the platform’s updated privacy policy. Some users interpreted the language as evidence that advertising would soon be introduced globally.

OpenAI said the policy update does not signal an immediate expansion. According to the company, advertising features are still being tested within the US as part of a gradual deployment strategy.

ChatGPT advertisements were introduced in February 2026 and appear below responses generated by the chatbot. The ads are shown only to logged-in users on free subscription tiers and are not displayed to users under eighteen.

Company representatives stated that advertising systems operate independently from the AI model that generates responses. According to OpenAI, advertisers cannot influence or modify the content produced by ChatGPT.

The company also said it does not share user conversations or personal chat histories with advertisers. However, advertisements may still be personalised based on user queries, which has prompted discussions about how conversational interfaces could shape consumer decisions.

OpenAI indicated that it is adopting a cautious, phased approach before considering any wider rollout of ChatGPT advertising features in other markets.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI plans to integrate Sora video generation into ChatGPT

According to reports, OpenAI is preparing to integrate its AI video generator Sora directly into ChatGPT, a move that could expand the platform’s capabilities beyond text and image generation.

Sora currently operates as a standalone application and web service. Integrating the tool into ChatGPT could dramatically increase its visibility and usage, particularly given the chatbot’s massive global user base.

The company released an updated version of the model in 2025 that allows users to create, remix and even appear inside AI-generated videos. Bringing those features into ChatGPT would represent a major step toward making video generation a mainstream function within conversational AI systems.

Competition in the generative video market is intensifying. Google, among others, is developing similar technology, with its Gemini platform offering video creation powered by the Veo model. Other developers are also launching text-to-video models as the field expands rapidly.

Despite the potential growth, integrating video generation into ChatGPT may significantly increase operating costs. Running large AI systems requires vast computing resources and energy, and the chatbot already costs billions of dollars annually to operate.

Although OpenAI earns revenue from subscriptions, the majority of ChatGPT users currently use the free version. The company is therefore exploring additional monetisation strategies, including advertising and new premium services.

Integrating Sora into ChatGPT could therefore serve both strategic and financial goals, strengthening the platform’s position in the competitive generative AI market while expanding the types of content users can create.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Deepfake attacks push organisations to rethink cybersecurity strategies

Organisations are strengthening their cybersecurity strategies as deepfake attacks become more convincing and easier to produce using generative AI.

Security experts warn that enterprises must move beyond basic detection tools and adopt layered security strategies to defend against the growing threat of deepfake attacks targeting communications and digital identity.

Many existing tools for identifying manipulated media are still imperfect. Digital forensics expert Hany Farid estimates that some systems used to detect deepfake attacks are only about 80 percent effective and often fail to explain how they determine whether an image, video, or audio recording is authentic. The lack of explainability also raises challenges for legal investigations and public verification of suspicious media.

Cybersecurity companies are creating new technologies to improve the detection of deepfake attacks by analysing subtle signals that are difficult for humans to notice. Firms such as GetReal Security, Reality Defender, Deep Media, and Sensity AI examine lighting consistency, shadow angles, voice patterns, and facial movements. Environmental indicators such as device location, metadata, and IP information can also help security teams spot potential deepfake attacks.

However, experts say detection alone cannot fully protect organisations from deepfake attacks. Companies are increasingly conducting internal red-team exercises that simulate impersonation scenarios to expose weaknesses in verification procedures. Multi-factor authentication techniques can reduce the risk of employees responding to fraudulent communications.

Another emerging defence involves digital provenance systems designed to track the origin and modification history of digital content. Initiatives such as the Coalition for Content Provenance and Authenticity (C2PA) embed cryptographically signed metadata into media files, allowing organisations to verify whether content linked to suspected deepfake attacks has been altered.
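The core idea behind provenance systems like C2PA — binding a cryptographic signature to both a media file’s hash and its claimed history, so any alteration breaks verification — can be illustrated with a minimal sketch. This is a conceptual toy only: real C2PA manifests use X.509 certificate chains and JUMBF containers embedded in the file, not the shared-secret HMAC and JSON manifest assumed here.

```python
import hashlib
import hmac
import json

# Hypothetical shared secret for this sketch; real C2PA uses
# public-key certificates issued to cameras and editing tools.
SIGNING_KEY = b"demo-secret-key"

def sign_asset(media_bytes: bytes, claims: dict) -> dict:
    """Build a provenance 'manifest' binding the claims to the media hash."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    manifest = {"asset_sha256": digest, "claims": claims}
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_asset(media_bytes: bytes, manifest: dict) -> bool:
    """True only if the manifest is authentic AND the media is unmodified."""
    unsigned = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False  # manifest forged or tampered with
    return manifest["asset_sha256"] == hashlib.sha256(media_bytes).hexdigest()

video = b"original video bytes"
m = sign_asset(video, {"tool": "camera-x", "edited": False})
print(verify_asset(video, m))                   # True: content intact
print(verify_asset(b"altered video bytes", m))  # False: content changed
```

Even a single-byte change to the media invalidates verification, which is what lets organisations flag content whose recorded provenance no longer matches its current bytes.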

Recent experiments highlight how challenging these threats can be. In February, cybersecurity company Reality Defender conducted an exercise with NATO, introducing deepfake media into a simulated military scenario. The findings showed that even experienced officials can struggle to identify manipulated communications, reinforcing calls for automated systems capable of detecting deepfake attacks across critical infrastructure.

As generative AI tools continue to advance, organisations are expected to combine detection technologies, stronger verification procedures, and provenance tracking to reduce the risks posed by deepfake attacks.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Hackers target WhatsApp and Signal in global encrypted messaging attacks

Foreign state-backed hackers are targeting accounts on WhatsApp and Signal used by government officials, diplomats, military personnel, and other high-value individuals, according to a security alert issued by the Portuguese Security Intelligence Service (SIS).

Portuguese authorities described the activity as part of a global cyber-espionage campaign aimed at gaining access to sensitive communications and extracting privileged information from Portugal and allied countries. The advisory did not identify the origin of the suspected attackers.

The warning follows similar alerts from other European intelligence agencies. Earlier this week, Dutch authorities reported that hackers linked to Russia were conducting a global campaign targeting the messaging accounts of officials, military personnel, and journalists.

Security agencies say the attackers are not exploiting vulnerabilities in the messaging platforms themselves. Both WhatsApp and Signal rely on end-to-end encryption designed to protect the content of messages from interception.

Instead, the campaign focuses on social engineering tactics that trick users into granting access to their accounts. According to the SIS report, attackers use phishing messages, malicious links, fake technical support requests, QR-code lures, and impersonation of trusted contacts.

The agency also warned that AI tools are increasingly being used to make such attacks more convincing. AI can help impersonate support staff, mimic familiar voices or identities, and conduct more realistic conversations through messages, phone calls, or video.

Once attackers gain access to an account, they may be able to read private messages, group chats, and shared files via WhatsApp and Signal. They can also impersonate the compromised user to launch additional phishing attacks targeting the victim’s contacts.

The alert echoes a previous warning issued by the Cybersecurity and Infrastructure Security Agency (CISA), which reported that encrypted messaging apps are increasingly being used as entry points for spyware and phishing campaigns targeting high-value individuals.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!