AI tool could help detect domestic violence risk years earlier

Researchers in the United States have developed an AI system designed to help doctors identify patients who may be at risk of intimate partner violence. The tool analyses hospital data to detect patterns associated with abuse, potentially enabling healthcare professionals to intervene earlier.

Intimate partner violence refers to abuse by a current or former partner and can lead to serious injuries, chronic pain, and long-term mental health problems. According to the European Commission, 18 percent of women with a current or former partner reported experiencing physical or sexual violence from that partner in 2021.

The study, published in the journal Nature, examined hospital records from nearly 850 women who had experienced intimate partner violence and a control group of more than 5,200 comparable patients. Researchers used the data to train three different machine learning systems to detect patterns associated with abuse.

One model analysed structured hospital data, such as age and medical history. A second model examined written clinical notes, including doctors’ observations and radiology reports. A third system combined both data types and achieved the strongest results, correctly identifying risk in 88 percent of cases.
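The combined model described above follows a common pattern: concatenate features from the structured record with signals extracted from free-text notes, then score the joint vector. A minimal Python sketch of that pattern, with all feature names, keywords, and weights invented for illustration (this is not the study's code):

```python
# Illustrative sketch of fusing structured hospital data with signals
# extracted from clinical notes. Keywords and weights are hypothetical.
import math

ABUSE_KEYWORDS = {"fracture", "bruising", "delayed presentation"}  # hypothetical

def note_features(note: str) -> list[float]:
    """One binary flag per keyword found in the clinical note."""
    text = note.lower()
    return [1.0 if kw in text else 0.0 for kw in sorted(ABUSE_KEYWORDS)]

def structured_features(age: int, prior_injury_visits: int) -> list[float]:
    """Minimal structured record: normalised age and prior visit count."""
    return [age / 100.0, min(prior_injury_visits, 10) / 10.0]

def risk_score(age, prior_injury_visits, note, weights, bias=-2.0):
    """Logistic score over the concatenated feature vector."""
    x = structured_features(age, prior_injury_visits) + note_features(note)
    z = bias + sum(w * xi for w, xi in zip(weights, x))
    return 1.0 / (1.0 + math.exp(-z))

# Hand-set weights stand in for a trained model.
weights = [0.5, 1.5, 1.0, 1.0, 1.2]
score = risk_score(34, 3, "Spiral fracture with old bruising noted.", weights)
```

In the study's combined model, the note features would come from language-model analysis of the text rather than keyword flags, but the fusion step is the same idea: both data types feed one classifier.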

Researchers found that the system could flag potential abuse more than three years before some patients later entered hospital-based intervention programmes. By analysing large datasets, the tool can detect patterns of physical trauma linked to abuse and alert clinicians so they can approach the issue carefully and offer support.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Security warning issued over OpenClaw AI agent

Cybersecurity authorities have warned that vulnerabilities in the OpenClaw AI agent could expose sensitive data. Officials in China say weak default security settings may allow attackers to exploit the system.

Experts in China warned that prompt injection attacks could manipulate OpenClaw when it accesses online content: malicious instructions hidden in websites may cause the AI agent to reveal confidential information.

Researchers have also identified risks involving link previews in messaging apps such as Telegram and Discord, where attackers could trick the system into sending sensitive data to malicious websites.

Security specialists advise organisations to strengthen protections around AI agents. Recommendations include isolating systems, limiting network access and installing only trusted software components.
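One of the recommendations above, limiting network access, can be expressed in code as an egress allowlist that the agent must pass before making any outbound request. A hypothetical sketch (the host names and policy are invented, and real deployments would also enforce this at the network layer, not only in application code):

```python
# Sketch of an egress allowlist for an AI agent. Hosts are hypothetical.
from urllib.parse import urlparse

ALLOWED_HOSTS = {"api.example-llm.com", "docs.internal.example"}  # hypothetical

def egress_allowed(url: str) -> bool:
    """Permit a request only when its host is explicitly allowlisted."""
    host = urlparse(url).hostname or ""
    return host.lower() in ALLOWED_HOSTS

def fetch_for_agent(url: str) -> str:
    """Gatekeeper the agent must pass before any outbound request."""
    if not egress_allowed(url):
        raise PermissionError(f"blocked egress to {url!r}")
    return f"fetched {url}"  # placeholder for the real HTTP call
```

A deny-by-default policy like this limits the damage of a successful prompt injection: even if hidden instructions persuade the agent to exfiltrate data, the request to an attacker-controlled host is refused.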

Seoul deepens ties with global AI developers

South Korea is pursuing a partnership with AI company Anthropic as part of a national strategy to strengthen technological capabilities. Officials are working toward a memorandum of understanding with the developer of the Claude AI system.

The initiative follows discussions between South Korea’s science minister and Anthropic’s chief executive, Dario Amodei, during an AI summit in New Delhi. Authorities are also preparing for the company’s planned Seoul office opening in 2026.

Government leaders in South Korea have already expanded cooperation with OpenAI. Policymakers say the strategy aims to build ties with leading global AI developers while supporting domestic innovation.

Officials are also developing a homegrown AI foundation model with local companies. The programme forms part of a national plan to position the country among the world’s leading AI powers.

AI and robotics could offset impact of ageing populations in Asia

Declining fertility rates have long been considered a major risk to economic growth, but analysts suggest the outlook may not be entirely negative for several advanced Asian economies. Rising investment in AI and robotics is increasingly viewed as a way to offset labour shortages caused by ageing populations.

According to analysts at Bank of America Global Research, technological innovation driven by AI and robotics could support productivity growth even as workforces shrink. Strong ecosystems in semiconductors, technology hardware, and industrial machinery allow some countries in the region to deploy advanced technologies faster and at lower cost than many other parts of the world.

South Korea currently has the highest robot density in the world, with about 1,012 industrial robots per 10,000 manufacturing workers. China has 470 and Japan 419, both significantly above the global average of 162, according to 2024 figures from the International Federation of Robotics.

Analysts say governments across East Asia are accelerating the adoption of AI and robotics to address demographic pressures. In particular, China, South Korea, and Japan have expanded investments in robotics, AI systems, and advanced manufacturing technologies to maintain economic productivity.

Population projections highlight the scale of the challenge facing these economies. By 2050, about 37 percent of Japan’s population and nearly 40 percent of South Korea’s population are expected to be aged 65 or older, while China’s share could reach around 31 percent.

Despite concerns about slowing growth, economists argue that advances in AI and robotics could weaken the traditional link between economic output and workforce size. Automation technologies not only replace routine tasks but also enhance human productivity in many industries.

A study by the Bank of Korea estimated that demographic pressures could reduce the country’s gross domestic product by 16.5 percent between 2023 and 2050. However, wider adoption of AI and robotics could limit the decline to around 5.9 percent under favourable conditions.

Some analysts caution that the economic benefits of automation may not be evenly distributed. While AI and robotics can improve productivity, technological gains often benefit capital owners and highly skilled workers more than others.

Economists also warn that consumption may slow as the number of households declines, while governments may face greater fiscal pressure from higher pension and healthcare costs. Policymakers may need to invest in workforce retraining and education to help workers adapt to the growing role of AI and robotics in the economy.

Meta removes encrypted messaging from Instagram DMs

Meta will discontinue end-to-end encryption for Instagram direct messages starting in May 2026. The company said the feature saw limited use among Instagram users.

Users with encrypted chats will receive instructions on how to download messages or media before the feature ends. Meta confirmed the change through updates to its support pages and in-app notifications.

The decision comes amid ongoing debate about encryption and online safety on major social platforms. Critics argue that encrypted messaging can make it harder to detect harmful activity involving minors.

Meta said users seeking encrypted communication can continue using WhatsApp or Messenger. The company maintains end-to-end encryption for messaging services outside Instagram.

French court upholds €40 million GDPR fine for Criteo

France’s highest administrative court has upheld a €40 million GDPR fine against advertising technology company Criteo. Regulators concluded that the firm failed to obtain valid consent for tracking users across websites.

The investigation began in 2018 following complaints from privacy groups and examined Criteo’s behavioural advertising model. The authorities said the company did not properly respect users’ rights to access, erasure and transparency.

The ruling also confirmed that pseudonymous identifiers linked to browsing data can still qualify as personal data. Judges rejected arguments that such identifiers were effectively anonymous.

Privacy advocates say the decision strengthens GDPR enforcement across Europe. Experts argue that the case highlights growing scrutiny of the online tracking practices used in digital advertising.

Study warns AI chatbots may reinforce delusional thinking

A new scientific review has raised concerns that AI chatbots could reinforce delusional thinking, particularly among people already vulnerable to psychosis. The review, published in The Lancet Psychiatry, summarises emerging evidence suggesting that chatbot interactions may validate or amplify delusional thinking in certain users.

The study examined reports and research discussing what some have described as ‘AI-associated delusions’. Dr Hamilton Morrin, a psychiatrist and researcher at King’s College London, analysed media reports and existing evidence exploring how chatbot responses might interact with psychotic symptoms.

Psychotic delusions generally fall into three categories: grandiose, romantic, and paranoid. Researchers say chatbots may unintentionally reinforce such beliefs because they often respond in ways that are supportive or affirming. In some reported cases, users received responses suggesting spiritual significance or implying that a higher entity was communicating through the chatbot.

Researchers emphasise that there is currently no clear evidence that AI systems can independently cause psychosis in individuals without prior vulnerability. However, interactions with chatbots could strengthen existing beliefs or accelerate the progression of delusional thinking in people already at risk.

Experts say the interactive nature of chatbots may intensify the effect. Unlike static sources of information such as videos or articles, chatbots can engage users directly and repeatedly, potentially reinforcing problematic beliefs more quickly.

Google Earth AI supports disease forecasting and public health planning

Researchers are increasingly combining geospatial data with predictive modelling to anticipate health risks.

In that context, Google has introduced new capabilities within Google Earth AI designed to help public health experts forecast outbreaks and identify vulnerable communities.

The system integrates environmental information such as weather patterns, flooding and air quality with population mobility data and health records.

These insights allow researchers to analyse how environmental conditions influence the spread of diseases, including dengue fever and cholera.

Several research initiatives are already testing the models. In collaboration with the World Health Organization Regional Office for Africa, forecasting tools combining Google’s time-series models with geospatial data improved cholera prediction accuracy by more than 35 percent.
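As a toy illustration of the general approach, a forecast can combine recent case counts with an environmental covariate such as rainfall. The data and coefficients below are invented and stand in for Google’s trained time-series models:

```python
# Toy forecast combining case counts with an environmental covariate.
# Coefficients and data are invented for illustration only.
def forecast_cases(cases, rainfall, w_case=0.8, w_rain=0.5, bias=1.0):
    """Predict next week's cases from last week's cases and rainfall.

    cases, rainfall: equal-length weekly series, most recent value last.
    """
    if len(cases) != len(rainfall) or not cases:
        raise ValueError("need equal-length, non-empty series")
    return bias + w_case * cases[-1] + w_rain * rainfall[-1]

# Heavier recent rainfall pushes the forecast up.
wet = forecast_cases([12, 15, 18], [40.0, 55.0, 80.0])
dry = forecast_cases([12, 15, 18], [40.0, 55.0, 10.0])
```

The value of the geospatial layer is exactly this coupling: environmental signals such as rainfall or flooding shift the forecast before the case counts alone would.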

Academic researchers are also applying the technology to other diseases. Scientists at the University of Oxford have used Earth AI datasets to improve six-month dengue forecasts in Brazil, helping local authorities prepare preventative responses.

The technology is also being tested for chronic disease analysis. In Australia, partnerships with health organisations are exploring how geospatial models can identify regional health needs and support preventative care strategies.

Combining environmental intelligence with health data could enable public health systems to shift from reactive crisis management to earlier detection and prevention of disease outbreaks.

EU reviews X compliance proposal under Digital Services Act

X has submitted a compliance proposal to the European Commission outlining how it intends to modify its blue check verification system following regulatory concerns under the Digital Services Act.

EU regulators concluded that the platform’s system allowed users to obtain verification simply by paying for a subscription, without meaningful identity checks, potentially misleading users about the authenticity of accounts.

The Commission imposed a €120 million fine in December and gave the company 60 working days to propose corrective measures. Officials confirmed that X met the deadline for submitting a plan, which regulators will now assess.

The platform, owned by Elon Musk, must also pay the penalty while the Commission evaluates the proposed changes. The company has challenged the enforcement decision before the EU’s General Court.

Europe aims to tighten AI rules and personal data standards

The Council of the EU has proposed AI Act amendments, banning nudification tools and tightening rules for processing sensitive personal data. The move represents a key step in streamlining the continent’s digital legislation and improving safeguards for citizens.

Council officials highlighted the prohibition of AI systems that generate non-consensual sexual content or child sexual abuse material. The measure matches a European Parliament ban, showing strong support for tighter AI controls amid misuse concerns.

The proposal follows incidents such as the Grok chatbot producing millions of non-consensual intimate images, which sparked a global backlash and prompted an EU probe into the social media platform X and its AI features.

Other amendments reinstate strict rules for processing sensitive data to detect bias and require providers to register high-risk AI systems, even if claiming exemptions. Negotiations between the Council and Parliament will finalise the AI Act’s updated measures.
