Italy orders Meta to lift WhatsApp AI restrictions

Italy’s competition authority has ordered Meta to halt restrictions limiting rival AI chatbots on WhatsApp. Regulators say the measures may distort competition as Meta integrates its own AI services.

The Italian watchdog argues Meta’s conduct risks restricting market access and slowing technical development. Officials warned that keeping the restrictions in place could cause lasting harm to competition and consumer choice.

Meta rejected the ruling and confirmed plans to appeal, calling the decision unfounded. The company stated that WhatsApp Business was never intended to serve as a distribution platform for AI services.

The case forms part of a broader European push to scrutinise dominant tech firms. Regulators are increasingly focused on the integration of AI across platforms with entrenched market power.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

South Korea fake news law sparks fears for press freedom

A significant debate has erupted in South Korea after the National Assembly passed new legislation aimed at tackling so-called fake news.

The revised Information and Communications Network Act bans the circulation of false or fabricated information online. It allows courts to impose punitive damages of up to five times the losses suffered when media outlets or YouTubers intentionally spread disinformation for unjust profit.

Journalists, unions and academics warn that the law could undermine freedom of expression and weaken journalism’s watchdog function instead of strengthening public trust.

Critics argue that ambiguity over who decides what constitutes fake news could shift judgement away from the courts and toward regulators or platforms, encouraging self-censorship and increasing the risk of abusive lawsuits by influential figures.

Experts also point out that South Korea lacks strong safeguards against malicious litigation, in contrast with the US, where plaintiffs must prove fault on the part of journalists.

The controversy reflects deeper public scepticism about South Korean media and long-standing reporting practices that sometimes relay statements without sufficient verification, suggesting that structural reform may be needed rather than rapid, punitive legislation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Aflac confirms large-scale data breach following cyber incident

US insurance firm Aflac has confirmed that a cyberattack disclosed in June affected around 22.65 million people. The breach involved the theft of sensitive personal and health information, though the company did not initially specify how many individuals were affected.

In filings with the Texas attorney general, Aflac said the compromised data includes names, dates of birth, home addresses, government-issued identification numbers, driving licence details, and Social Security numbers. Medical and health insurance information was also accessed during the incident.

A separate filing with the Iowa attorney general suggested the attackers may be linked to a known cybercriminal organisation. Federal law enforcement and external cybersecurity specialists indicated the group had been targeting the insurance sector more broadly.

Security researchers have linked a wave of recent insurance-sector breaches to Scattered Spider, a loosely organised group of predominantly young, English-speaking hackers. The timing and targeting of the Aflac incident align with the group’s activity.

The US company stated that it has begun notifying the affected individuals. The company, which reports having around 50 million customers, did not respond to requests for comment. Other insurers, including Erie Insurance and Philadelphia Insurance Companies, reported breaches during the same period.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Chest X-rays gain new screening potential through AI

AI is extending the clinical value of chest X-rays beyond lung and heart assessment. Researchers are investigating whether routine radiographs can support broader disease screening without the need for additional scans. Early findings suggest existing images may contain underused diagnostic signals.

A study in Radiology: Cardiothoracic Imaging examined whether AI could detect hepatic steatosis (fatty liver disease) from standard frontal chest X-rays. Researchers analysed more than 6,500 images from over 4,400 patients across two institutions. Deep learning models were trained and externally validated.

The AI system achieved area-under-the-curve (AUC) scores above 0.8 in both internal and external tests. Saliency maps showed that predictions focused on the region near the diaphragm, where part of the liver appears on chest X-rays. The results suggest that reliable signals can be extracted from routine imaging.
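As an illustration only, the sketch below shows how the two quantities mentioned above, an AUC score and a saliency map, can be computed for a binary image classifier. The data, the logistic-regression stand-in model, and the image size are synthetic assumptions for demonstration; they are not the study's actual pipeline.

```python
# Minimal, self-contained sketch: AUC and a crude saliency map for a
# binary image classifier. Synthetic stand-ins, not the study's models.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic "chest X-rays": 200 images of 32x32 pixels, flattened.
X = rng.normal(size=(200, 32 * 32))
y = rng.integers(0, 2, size=200)      # 1 = finding present, 0 = no finding
X[y == 1, :64] += 1.0                 # plant a weak signal in one image region

# Stand-in model: logistic regression on raw pixels.
model = LogisticRegression(max_iter=1000).fit(X, y)

# Area under the ROC curve, the metric reported as above 0.8 in the study.
scores = model.predict_proba(X)[:, 1]
print("AUC:", roc_auc_score(y, scores))

# Crude saliency map: for a linear model, the gradient of the score with
# respect to each pixel is its weight, so absolute weights indicate which
# image regions drive the prediction.
saliency = np.abs(model.coef_).reshape(32, 32)
print("Most influential rows:", saliency.sum(axis=1).argsort()[-3:])
```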

Researchers argue the approach could enable opportunistic screening during standard care. Patients flagged by AI could be referred for a dedicated liver assessment when appropriate. The method adds clinical value without increasing imaging costs or radiation exposure.

Experts caution that the model is not a standalone diagnostic tool and requires further prospective validation. Integration with clinical and laboratory data remains necessary to reduce false positives. If validated, AI-enhanced X-rays could support scalable risk stratification.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI chatbots reshape learning habits and critical thinking debates

Use of AI chatbots for everyday tasks, from structuring essays to analysing data, has become widespread. Researchers are increasingly examining whether reliance on such tools affects critical thinking and learning. Recent studies suggest a more complex picture than simple decline.

A study from MIT researchers found reduced cognitive activity among participants who used ChatGPT to write essays. Participants also showed weaker recall than those who completed tasks without AI assistance, raising questions about how learning develops when writing is outsourced.

Similar concerns emerged from studies by Carnegie Mellon University and Microsoft. Surveys of white-collar workers linked higher confidence in AI tools with lower levels of critical engagement, prompting warnings about possible overreliance.

Studies involving students present a more nuanced outcome. Research published by Oxford University Press found that many pupils felt AI supported skills such as revision and creativity. At the same time, some reported that tasks became too easy, limiting deeper learning.

Experts emphasise that outcomes depend on how AI tools are used. Educators argue for clearer guidance, transparency, and further research into long-term effects. Used as a tutor rather than a shortcut, AI may support learning rather than weaken it.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI analysis charts shifts in ageing research trends

A new research paper published in Aging-US uses AI to map a century of global ageing research, analysing how scientific priorities have shifted over time and identifying underexplored areas.

Researchers analysed more than 460,000 scientific abstracts published between 1925 and 2023. Natural language processing and machine learning were used to cluster themes and track trends. The aim was to provide an unbiased view of the field’s evolution.
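To illustrate the general approach of clustering abstracts into themes with natural language processing, here is a minimal sketch. It is not the paper's actual pipeline; the abstracts, cluster count, and TF-IDF plus k-means choices are assumptions made for the example.

```python
# Illustrative only: group research abstracts into themes with TF-IDF
# vectors and k-means clustering. The abstracts below are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

abstracts = [
    "Autophagy and nutrient sensing pathways in cellular ageing",
    "Clinical trial of a dementia intervention in older adults",
    "Epigenetic clocks as biomarkers of biological age",
    "Alzheimer's disease progression in a longitudinal cohort",
    "mTOR signalling, rapamycin and lifespan extension in mice",
    "Caregiver burden and dementia care in ageing populations",
]

# Convert each abstract to a TF-IDF vector, then group into two themes.
vectors = TfidfVectorizer(stop_words="english").fit_transform(abstracts)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for cluster, text in sorted(zip(labels, abstracts)):
    print(cluster, text)
```

Run at scale over hundreds of thousands of abstracts, the resulting clusters and their sizes over time give the kind of trend map the study describes.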

Findings show a shift from basic biological studies toward clinical research, particularly on age-related diseases such as Alzheimer’s and dementia. Basic science continues to focus on cellular mechanisms, and limited overlap persists between laboratory and clinical research.

Several fast-growing topics, including autophagy, RNA biology, and nutrient sensing, remain weakly connected to clinical applications. Strong links endure between areas such as cancer and ageing, while other associations, such as that between epigenetics and autophagy, are rarely explored.

The analysis highlights gaps that may shape future ageing research priorities. AI-based mapping provides insights into how funding and policy shape focus areas. Greater integration could support more effective translation into clinical outcomes.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

South Korea tightens ID checks with facial verification for phone accounts

Mandatory facial verification will be introduced in South Korea for anyone opening a new mobile phone account, as authorities try to limit identity fraud.

Officials said criminals have been using stolen personal details to set up phone numbers that are later used for scams such as voice phishing rather than for legitimate services.

Major mobile carriers, including LG Uplus, Korea Telecom and SK Telecom, will verify users by matching their faces against biometric data stored in the PASS digital identity app.

Such a requirement expands the country’s identity checks rather than replacing them outright, and is intended to make it harder for fraud rings to exploit stolen data at scale.

The measure follows a difficult year for data security in South Korea, marked by cyber incidents affecting more than half the population.

SK Telecom reported a breach involving all 23 million of its customers and now faces more than $1.5 billion in penalties and compensation.

Regulators also revealed that mobile virtual network operators were linked to 92% of counterfeit phones uncovered in 2024, strengthening the government’s case for tougher identity controls.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

DeepMind chief renews the AI intelligence debate

Amid growing attention on AI, Google DeepMind chief Demis Hassabis has argued that future systems could learn anything humans can.

He suggested that as technology advances, AI may no longer remain confined to single tasks. Instead of specialising narrowly, it could solve different kinds of problems and continue improving over time.

Supporters say rapid progress already shows how powerful the technology has become.

Other experts disagree and warn that human intelligence remains deeply complex. People rely on emotions, personal experience and social understanding when they think, while machines depend on data and rules.

Critics argue that comparing AI with the human mind oversimplifies how intelligence really works, and that even people vary widely in ability.

Elon Musk has supported the idea that AI could eventually learn as much as humans, while repeating his long-standing view that powerful systems must be handled carefully. His backing has intensified the debate, given his influence in the technology world.

The discussion matters because highly capable AI could reshape work, education and creativity, raising questions over safety and control.

For now, AI performs specific tasks extremely well yet cannot think or feel like humans, and no one can say for certain whether true human-level intelligence will ever emerge.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Romania investigates large-scale cyberattack on national water authority

Authorities in Romania have confirmed a severe ransomware attack on the national water administration ‘Apele Române’, which encrypted around 1,000 IT systems across most regional water basin offices.

Attackers used Microsoft’s BitLocker tool to lock files and then issued a ransom note demanding contact within seven days, although cybersecurity officials continue to reject any negotiation with criminals.

The disruption affected email systems, databases, servers and workstations rather than operational technology, meaning hydrotechnical structures and critical water management systems continued to function safely.

Staff coordinated activity by radio and telephone, and flood defence operations remained in normal working order while investigations and recovery progressed.

National cyber agencies, including the National Directorate of Cyber Security and the Romanian Intelligence Service’s cyber centre, are now restoring systems and moving to include water infrastructure within the state cyber protection framework.

The case underlines how ransomware groups increasingly target essential utilities rather than only private companies, making resilience and identity controls a strategic priority.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

South Korea plans huge fines for major data breaches

Prime Minister Kim Min-seok has called for punitive fines of up to 10 percent of company sales for repeated and serious data breaches, as public anger grows over large-scale leaks.

The government is seeking swift legislation to impose stronger sanctions on firms that fail to safeguard personal data, reflecting President Lee Jae Myung’s stance that violations require firm penalties rather than lenient warnings.

Kim said corporate responses to recent breaches had fallen far short of public expectations and stressed that companies must take full responsibility for protecting customer information.

Under the proposed framework, affected individuals would receive clearer notifications that include guidance on their rights to seek damages.

The government of South Korea also plans to strengthen investigative powers through coercive fines for noncompliance, while pursuing rapid reforms aimed at preventing further harm.

The tougher line follows a series of major incidents, including a leak at Shinhan Card that affected around 190,000 merchant records and a large-scale breach at Coupang that exposed the data of 33.7 million users.

Officials have described the Coupang breach as a serious social crisis that has eroded public trust.

Authorities have launched an interagency task force to identify responsibility and ensure tighter data protection across South Korea’s digital economy, rather than relying on voluntary company action.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!