Apple support scam targets users with real tickets

Cybercriminals are increasingly exploiting Apple’s support system to trick users into surrendering their accounts. Fraudsters open real support tickets in a victim’s name, which triggers official Apple emails and creates a false sense of legitimacy. These messages appear professional, making it difficult for users to detect the scam.

Victims often receive a flood of alerts, including two-factor authentication notifications, followed by phone calls from callers posing as Apple agents. The scammers guide users through steps that appear to secure their accounts, often directing them to convincing fake websites that request sensitive information.

Entering verification codes or following instructions on these fraudulent pages gives attackers access to the account. Even experienced users can fall prey because the emails come from official Apple domains, and the phone calls are carefully scripted to build trust.

Experts recommend checking support tickets directly within your Apple ID account, never sharing verification codes, and reviewing all devices linked to your account. Using antivirus software, activating two-factor authentication, and limiting personal information online further strengthen protection against such sophisticated phishing attacks.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Australia stands firm on under-16 social media ban

Australia’s government defended its under-16 social media ban ahead of its introduction on 10 December. Minister Anika Wells said she would not be pressured by major platforms opposing the plan.

Tech companies argued that bans may prove ineffective, yet Wells maintained firms had years to address known harms. She insisted parents required stronger safeguards after repeated failures by global platforms.

Critics raised concerns about enforcement and the exclusion of online gaming despite widespread worries about Roblox. Two teenagers also launched a High Court challenge, claiming the policy violated children’s rights.

Wells accepted rollout difficulties but said wider social gains in Australia justified firm action. She added that policymakers must intervene when unsafe operating models place young people at risk.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

When AI use turns dangerous for diplomats

Diplomats are increasingly turning to tools like ChatGPT and DeepSeek to speed up drafting, translating, and summarising documents, a trend Jovan Kurbalija describes as the rise of ‘Shadow AI.’ These platforms, often used through personal accounts or consumer apps, offer speed and convenience that overstretched diplomatic services struggle to match.

But the same ease of use that makes Shadow AI attractive also creates a direct clash with diplomacy’s long-standing foundations of discretion and controlled ambiguity.

Kurbalija warns that this quiet reliance on commercial AI platforms exposes sensitive information in ways diplomats may not fully grasp. Every prompt, whether drafting talking points, translating notes, or asking for negotiation strategies, reveals assumptions, priorities, and internal positions.

Over time, this builds a detailed picture of a country’s concerns and behaviour, stored on servers outside diplomatic control and potentially accessible through foreign legal systems. The risk is not only data leakage but also the erosion of diplomatic craft, as AI-generated text encourages generic language, inflates documents, and blurs the national nuances essential to negotiation.

The problem, Kurbalija argues, is rooted in a ‘two-speed’ system. Technology evolves rapidly, while institutions adapt slowly.

Diplomatic services can take years to develop secure, in-house tools, while commercial AI is instantly available on any phone or laptop. Yet the paradox is that safe, locally controlled AI, based on open-source models, is technically feasible and financially accessible. What slows progress is not technology, but how ministries manage and value knowledge, their core institutional asset.

Rather than relying on awareness campaigns or bans, which rarely change behaviour, Kurbalija calls for a structural shift in which foreign ministries build trustworthy, in-house AI ecosystems that keep all prompts, documents, and outputs within controlled government environments. That requires redesigning workflows, integrating AI into records management, and empowering the diplomats who have already experimented informally with these tools.

Only by moving AI from the shadows into a secure, well-governed framework, he argues, can diplomacy preserve its confidentiality, nuance, and institutional memory in the age of AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Singapore and the EU advance their digital partnership

The European Union and Singapore met in Brussels for the second Digital Partnership Council, reinforcing their joint ambition to strengthen cooperation across a broad set of digital priorities.

Both sides expressed a shared interest in improving competitiveness, expanding innovation and shaping common approaches to digital rules instead of relying on fragmented national frameworks.

Discussions covered AI, cybersecurity, online safety, data flows, digital identities, semiconductors and quantum technologies.

Officials highlighted the importance of administrative arrangements in AI safety. They explored potential future cooperation on language models, including the EU’s work on the Alliance for Language Technologies and Singapore’s Sea-Lion initiative.

Efforts to protect consumers and support minors online were highlighted, alongside the potential role of age verification tools.

Further exchanges focused on trust services and the interoperability of digital identity systems, as well as collaborative research on semiconductors and quantum technologies.

Both sides emphasised the importance of robust cyber resilience and ongoing evaluation of cybersecurity risks, rather than relying on reactive measures. The recently signed Digital Trade Agreement was welcomed for improving legal certainty, building consumer trust and reducing barriers to digital commerce.

The meeting between the EU and Singapore confirmed the importance of the partnership in supporting economic security, strengthening research capacity and increasing resilience in critical technologies.

It also reflected the wider priorities outlined in the European Commission’s International Digital Strategy, which placed particular emphasis on cooperation with Asian partners across emerging technologies and digital governance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Poetic prompts reveal gaps in AI safety, according to study

Researchers in Italy have found that poetic language can weaken the safety barriers used by many leading AI chatbots.

The study, conducted by Icaro Lab, part of DexAI, examined whether poems containing harmful requests could provoke unsafe answers from widely deployed models across the industry. The team wrote twenty poems in English and Italian, each ending with explicit instructions that AI systems are trained to block.

The researchers tested the poems on twenty-five models developed by nine major companies. Poetic prompts produced unsafe responses in more than half of the tests.

Some models appeared more resilient than others. OpenAI’s GPT-5 Nano avoided unsafe replies in every case, while Google’s Gemini 2.5 Pro generated harmful content in all tests. Two Meta systems produced unsafe responses to twenty percent of the poems.

The researchers argue that poetic structure disrupts the predictive patterns large language models rely on to filter harmful material: the unconventional rhythm and metaphor common in poetry make the underlying safety mechanisms less reliable.

Additionally, the team warned that adversarial poetry can be used by anyone, which raises concerns about how easily safety systems may be manipulated in everyday use.

Before releasing the study, the researchers contacted all companies involved and shared the full dataset with them.

Anthropic confirmed receipt and stated that it was reviewing the findings. The work has prompted debate over how AI systems can be strengthened as creative language becomes an increasingly common method for attempting to bypass safety controls.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Coupang breach prompts scrutiny from South Korean regulators

South Korea is examining a significant data breach at Coupang after the retailer confirmed exposure of personal details linked to millions of users. Officials say the incident involves only domestic accounts. Regulators have opened a formal investigation.

Coupang first reported a small number of affected users, then revised its estimate to 33.7 million. The firm states that the leaked data includes names and contact details. It maintains that passwords and payment information remain secure.

Authorities believe the breach may date back several months and may involve an overseas server. Local media report suspicions that a former employee based in China was involved. Investigators are assessing whether data-protection rules were breached.

The incident adds to a series of cyberattacks on major firms in South Korea this year. Commentators say repeated lapses point to structural weaknesses. Previous breaches at SK Telecom and Lotte Card remain fresh in public memory.

Coupang has apologised and warned customers to watch for scams using stolen information. Regulators pledge to enforce swiftly if violations are confirmed. The case has reignited debate over corporate safeguards and national cyber resilience.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

South Korea retailer admits worst-ever data leak

Coupang disclosed a major data breach on 30 November 2025 that exposed 33.7 million customer accounts. The leaked data includes names, email addresses, phone numbers, shipping addresses and some order history but excludes payment or login credentials.

The company said it first detected unauthorised access on 18 November. Subsequent investigations revealed that attacks likely began on 24 June through overseas servers and may involve a former employee’s still-active authentication key.

South Korean authorities launched an emergency probe to determine if Coupang violated data-protection laws. The government warned customers to stay alert to phishing and fraud attempts using the leaked information.

Cybersecurity experts say the breach may be one of the worst personal-data leaks in Korean history. Critics claim the incident underlines deep structural weaknesses in corporate cybersecurity practices.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Fraud and scam cases push FIDReC workloads to new highs

FIDReC recorded 4,355 claims in FY2024/2025, marking its highest volume in twenty years and a sharp rise from the previous year. Scam activity and broader dispute growth across financial institutions contributed to the increase. Greater public awareness of the centre’s role also drove more filings.

Fraud and scam disputes climbed to 1,285 cases, up more than 50% and accounting for nearly half of the claims accepted for handling. FIDReC accepted 2,646 claims in total, with early resolution procedures reducing formal caseload growth. The phased approach encourages direct negotiation between consumers and providers.

Chief Executive Eunice Chua said rising claim volumes reflect fast-evolving financial risks and increasingly complex products. National indicators show similar pressures, with Singapore ranked second globally for payment card scams. Insurance fraud reports also continued to grow during the year.

Compromised credentials accounted for most scam-related cases, often involving unauthorised withdrawals or card charges. Consumers reported incidents without knowing how their details were obtained. The share of such complaints rose markedly compared with the previous year.

Banks added safeguards on large digital withdrawals as part of wider anti-scam measures. Regulators introduced cooling-off periods, stronger information sharing and closer monitoring of suspicious activity. Authorities say the goal is to limit exposure to scams and reinforce public confidence.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK to require crypto traders to report details from 2026

The UK government has confirmed that cryptocurrency traders will be required to report personal details to trading platforms from 1 January 2026. The move forms part of the Cryptoasset Reporting Framework (CARF), aligned with an OECD agreement, and aims to improve compliance with existing tax rules.

Under the framework, exchanges must provide HM Revenue & Customs (HMRC) with customer information, including cryptocurrency transactions and tax reference numbers.

Traders who fail to supply required details could face fines of up to £300, while platforms may be fined the same amount per unreported customer. HMRC expects to raise up to £315 million by 2030 from the new reporting rules.

Experts warn that exchanges may face challenges collecting accurate information, potentially passing compliance costs on to users. Some investors may initially turn to non-compliant platforms, but international standards are expected to drive global alignment over time.

The 2025 Budget also addressed the taxation of DeFi activities such as lending and staking. HMRC appears to favour taxing gains only when they are realised, although no final decision has been made and consultations with stakeholders will continue.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Vanity Fair publisher penalised for cookie breaches

France’s data regulator, the CNIL, fined Les Publications Condé Nast €750,000 for unlawful cookie practices on vanityfair.fr. Investigators found that cookies requiring consent were set as soon as visitors landed on the site, before consent could be given.

CNIL officials also noted unclear information that described several trackers as strictly necessary without explaining their true purposes. Users faced further issues when refusal mechanisms failed to block or stop consent-based cookies.

Repeated non-compliance weighed heavily, as the company had already received a formal order in 2021. Earlier proceedings had been closed after corrective steps, yet later inspections showed renewed breaches.

The French regulator stated that millions of visitors were potentially affected by the unlawful tracking activity. The case highlights continuing enforcement efforts under Article 82 of France’s Data Protection Act.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!