A hacker using the name Lovely has allegedly claimed to have accessed subscriber data belonging to WIRED and to have leaked details relating to around 2.3 million users.
The same individual also states that a wider Condé Nast account system covering more than 40 million users could be exposed in future leaks instead of ending with the current dataset.
Security researchers are reported to have matched samples of the claimed leak with other compromised data sources. The information is said to include names, email addresses, user IDs and timestamps instead of passwords or payment information.
Some researchers also believe that certain home addresses could be included, which would raise privacy concerns if verified.
The dataset is reported to be listed on Have I Been Pwned. However, no official confirmation from WIRED or Condé Nast has been issued regarding the authenticity, scale or origin of the claimed breach, and the company’s internal findings remain unknown until now.
The hacker has also accused Condé Nast of failing to respond to earlier security warnings, although these claims have not been independently verified.
Users are being urged by security professionals to treat unexpected emails with caution instead of assuming every message is genuine.
Would you like to learn more aboutAI, tech and digital diplomacy? If so, ask our Diplo chatbot!
South Korea has blamed weak femtocell security at KT Corp for a major mobile payment breach that triggered thousands of unauthorised transactions.
Officials said the mobile operator used identical authentication certificates across femtocells and allowed them to stay valid for ten years, meaning any device that accessed the network once could do so repeatedly instead of being re-verified.
More than 22,000 users had identifiers exposed, and 368 people suffered unauthorised payments worth 243 million won.
Investigators also discovered that ninety-four KT servers were infected with over one hundred types of malware. Authorities concluded the company failed in its duty to deliver secure telecommunications services because its overall management of femtocell security was inadequate.
The government has now ordered KT to submit detailed prevention plans and will check compliance in June, while also urging operators to change authentication server addresses regularly and block illegal network access.
Officials said some hacking methods resembled a separate breach at SK Telecom, although there is no evidence that the same group carried out both attacks. KT said it accepts the findings and will soon set out compensation arrangements and further security upgrades instead of disputing the conclusions.
A separate case involving LG Uplus is being referred to police after investigators said affected servers were discarded, making a full technical review impossible.
The government warned that strong information security must become a survival priority as South Korea aims to position itself among the world’s leading AI nations.
Would you like to learn more aboutAI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Protecting AI agents from manipulation has become a top priority for OpenAI after rolling out a major security upgrade to ChatGPT Atlas.
The browser-based agent now includes stronger safeguards against prompt injection attacks, where hidden instructions inside emails, documents or webpages attempt to redirect the agent’s behaviour instead of following the user’s commands.
Prompt injection poses a unique risk because Atlas can carry out actions that a person would normally perform inside a browser. A malicious email or webpage could attempt to trigger data exposure, unauthorised transactions or file deletion.
Criminals exploit the fact that agents process large volumes of content across an almost unlimited online surface.
OpenAI has developed an automated red-team framework that uses reinforcement learning to simulate sophisticated attackers.
When fresh attack patterns are discovered, the models behind Atlas are retrained so that resistance is built into the agent rather than added afterwards. Monitoring and safety controls are also updated using real attack traces.
These new protections are already live for all Atlas users. OpenAI advises people to limit logged-in access where possible, check confirmation prompts carefully and give agents well-scoped tasks instead of broad instructions.
The company argues that proactive defence is essential as agentic AI becomes more capable and widely deployed.
Would you like to learn more aboutAI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Researchers are warning that AI chatbots may treat dialect speakers unfairly instead of engaging with them neutrally. Studies across English and German dialects found that large language models often attach negative stereotypes or misunderstand everyday expressions, leading to discriminatory replies.
A study in Germany tested ten language models using dialects such as Bavarian and Kölsch. The systems repeatedly described dialect speakers as uneducated or angry, and the bias became stronger when the dialect was explicitly identified.
Similar findings emerged elsewhere, including UK council services and AI shopping assistants that struggled with African American English.
Experts argue that such patterns risk amplifying social inequality as governments and businesses rely more heavily on AI. One Indian job applicant even saw a chatbot change his surname to reflect a higher caste, showing how linguistic bias can intersect with social hierarchy instead of challenging it.
Developers are now exploring customised AI models trained with local language data so systems can respond accurately without reinforcing stereotypes.
Researchers say bias can be tuned out of AI if handled responsibly, which could help protect dialect speakers rather than marginalise them.
Would you like to learn more aboutAI, tech and digital diplomacy? If so, ask our Diplo chatbot!
South Korea has introduced mandatory facial recognition for anyone registering a new SIM card or eSIM, whether in-store or online.
The live scan must match the photo on an official ID so that each phone number can be tied to a verified person instead of relying on paperwork alone.
Existing users are not affected, and the requirement applies only at the moment a number is issued.
The government argues that stricter checks are needed because telecom fraud has become industrialised and relies heavily on illegally registered SIM cards.
Criminal groups have used stolen identity data to obtain large volumes of numbers that can be swapped quickly to avoid detection. Regulators now see SIM issuance as the weakest link and the point where intervention is most effective.
Telecom companies must integrate biometric checks into onboarding, while authorities insist that facial data is used only for real-time verification and not stored. Privacy advocates warn that biometric verification creates new risks because faces cannot be changed if compromised.
They also question whether such a broad rule is proportionate when mobile access is essential for daily life.
The policy places South Korea in a unique position internationally, combining mandatory biometrics with defined legal limits. Its success will be judged on whether fraud meaningfully declines instead of being displaced.
A rule that has become a test case for how far governments should extend biometric identity checks into routine services.
Would you like to learn more aboutAI, tech and digital diplomacy? If so, ask our Diplo chatbot!
UK outdoor enthusiasts are warned not to rely solely on AI for tide times or weather. Errors recently stranded visitors on Sully Island, showing the limits of unverified information.
Maritime authorities recommend consulting official sources such as the UK Hydrographic Office and Met Office. AI tools may misread tables or local data, making human oversight essential for safety.
Mountain rescue teams report similar issues when inexperienced walkers used AI to plan trips. Even with good equipment, lack of judgement can turn minor errors into dangerous situations.
Practical experience, professional guidance, and verified data remain critical for safe outdoor activities. Relying on AI alone can create serious risks, especially on tidal beaches and challenging mountain routes.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
South Korean e-commerce firm Coupang has apologised for a major data breach affecting more than 33 million users and announced a compensation package worth 1.69 trillion won. Founder Kim Bom acknowledged the disruption caused, following public and political backlash over the incident.
Under the plan, affected customers will receive vouchers worth 50,000 won, usable Choi Minonly on Coupang’s own platforms. The company said the measure was intended to compensate users, but the approach has drawn criticism from lawmakers and consumer groups.
Choi Min-hee, a lawmaker from the ruling Democratic Party, criticised the decision in a social media post, arguing that the vouchers were tied to services with limited use. She accused Coupang of attempting to turn the crisis into a business opportunity.
Consumer advocacy groups echoed these concerns, saying the compensation plan trivialised the seriousness of the breach. They argued that limiting compensation to vouchers resembled a marketing strategy rather than meaningful restitution for affected users.
The controversy comes as the National Assembly of South Korea prepares to hold hearings on Coupang. While the company has admitted negligence, it has declined to appear before lawmakers amid scrutiny of its handling of the breach.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
OpenAI has launched GPT-5.2, highlighting improved safety performance in conversations involving mental health. The company said the update strengthens how its models respond to signs of suicide, self-harm, emotional distress, and reliance on the chatbot.
The release follows criticism and legal challenges accusing ChatGPT of contributing to psychosis, paranoia, and delusional thinking in some users. Several cases have highlighted the risks of prolonged emotional engagement with AI systems.
In response to a wrongful death lawsuit involving a US teenager, OpenAI denied responsibility while stating that ChatGPT encouraged the user to seek help. The company also committed to improving responses when users display warning signs of mental health crises.
OpenAI said GPT-5.2 produces fewer undesirable responses in sensitive situations than earlier versions. According to the company, the model scores higher on internal safety tests related to self-harm, emotional reliance, and mental health.
The update builds on OpenAI’s use of a training approach known as safe completion, which aims to balance helpfulness and safety. Detailed performance information has been published in the GPT-5.2 system card.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Europe’s healthcare systems turned increasingly to AI in 2025, using new tools to predict disease, speed diagnosis, and reduce administrative workloads.
Countries including Finland, Estonia and Spain adopted AI to train staff, analyse medical data and detect illness earlier, while hospitals introduced AI scribes to free up doctors’ time with patients.
Researchers also advanced AI models able to forecast more than a thousand conditions many years before diagnosis, including heart disease, diabetes and certain cancers.
Further tools detected heart problems in seconds, flagged prostate cancer risks more quickly and monitored patients recovering from stent procedures instead of relying only on manual checks.
Experts warned that AI should support clinicians rather than replace them, as doctors continue to outperform AI in emergency care and chatbots struggle with mental health needs.
Security specialists also cautioned that extremists could try to exploit AI to develop biological threats, prompting calls for stronger safeguards.
Despite such risks, AI-driven approaches are now embedded across European medicine, from combating antibiotic-resistant bacteria to streamlining routine paperwork. Policymakers and health leaders are increasingly focused on how to scale innovation safely instead of simply chasing rapid deployment.
Would you like to learn more aboutAI, tech and digital diplomacy? If so, ask our Diplo chatbot!
China has proposed new rules to restrict AI chatbots from influencing human emotions in ways that could lead to suicide or self-harm. The Cyberspace Administration released draft regulations, open for public comment until late January.
The measures target human-like interactive AI services, including emotionally responsive AI chatbots, that simulate personality and engage users through text, images, audio, or video. Officials say the proposals signal a shift from content safety towards emotional safety as AI companions gain popularity.
Under the draft rules, AI chatbot services would be barred from encouraging self-harm, emotional manipulation, or obscene, violent, or gambling-related content. Providers would be required to involve human moderators if users express suicidal intent.
Additional provisions would strengthen safeguards for minors, including guardian consent and usage limits for emotionally interactive systems. Platforms would also face security assessments and interaction reminders when operating services with large user bases.
Experts say the proposals could mark the world’s first attempt to regulate emotionally responsive AI systems. The move comes as China-based chatbot firms pursue public listings and as global scrutiny grows over how conversational AI affects mental health and user behaviour.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!