SIM-binding mandate forces changes to WhatsApp use in India

India plans to change how major messaging apps operate under new rules requiring SIM binding and frequent re-verification. The directive obliges platforms to confirm that the original SIM remains active, altering long-standing habits around device switching. Services have 90 days to comply with the order.

The Department of Telecommunications says continuous SIM checks will reduce misuse by linking each account to a live subscriber identity. Companion tools such as WhatsApp Web will automatically log out every six hours, and users will need to relink sessions with a QR code to stay connected.
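To make the mechanics concrete, here is a minimal sketch of how a six-hour companion-session check could work on the server side. The directive does not specify an implementation, so every name below is hypothetical.

```python
from datetime import datetime, timedelta, timezone

SESSION_TTL = timedelta(hours=6)  # the six-hour window described above

def session_is_valid(issued_at: datetime, sim_is_active: bool) -> bool:
    """Hypothetical check: a companion session stays valid only while
    the TTL has not lapsed and the bound SIM is still active."""
    expired = datetime.now(timezone.utc) - issued_at >= SESSION_TTL
    return sim_is_active and not expired

# A session issued seven hours ago fails the check and would be logged
# out, forcing the user back through the QR re-linking flow.
issued = datetime.now(timezone.utc) - timedelta(hours=7)
print(session_is_valid(issued, sim_is_active=True))  # False
```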

The rules apply to apps that rely on phone numbers, including WhatsApp, Signal, Telegram, and local platforms. The approach mirrors SIM-bound verification used in banking apps in India. It adds a deeper security layer that goes beyond one-time codes and registration checks.

The change may inconvenience people who use Wi-Fi-only tablets or older devices without an active SIM card. It also affects anyone who relies on WhatsApp Web for work or on multi-device setups under a single number. Messaging apps may need new login systems to ease the shift.

Officials argue that tighter controls are needed to limit cyber fraud and protect consumers. Users may still access services, but with reduced flexibility and more frequent verification. India’s move signals a broader push for stronger digital safeguards across core communications tools.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Dublin startup raises US$2.5 million to protect AI data with encryption

Mirror Security, founded at University College Dublin, has announced a US$2.5 million (approx. €2.15 million) pre-seed funding round to develop what it describes as the next generation of secure AI infrastructure.

The startup’s core product, VectaX, is a fully homomorphic encryption (FHE) engine designed for AI workloads. This technology allows AI systems to process, train or infer on data that remains encrypted, meaning sensitive or proprietary data never has to be exposed in plaintext, even during computation.
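VectaX itself is proprietary, but the FHE principle it builds on can be sketched with the open-source TenSEAL library: a server can compute over an encrypted vector, for instance a linear scoring step of a model, without ever seeing the plaintext. This is generic CKKS usage for illustration, not Mirror Security's implementation.

```python
import tenseal as ts  # pip install tenseal

# Set up a CKKS context for approximate arithmetic over encrypted floats.
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2**40
context.generate_galois_keys()

features = ts.ckks_vector(context, [0.3, 0.7, 1.2])  # encrypted input
weights = [0.5, -0.8, 0.1]                           # plaintext model weights

# The dot product runs entirely over ciphertext; the party computing it
# never sees the plaintext features.
encrypted_score = features.dot(weights)
print(encrypted_score.decrypt())  # only the key holder can read the result
```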

Backed by leading deep-tech investors such as Sure Valley Ventures (SVV) and Atlantic Bridge, Mirror Security plans to scale its engineering and AI-security teams across Ireland, the US and India, accelerate development of encrypted inferencing and secure fine-tuning, and target enterprise markets in the US.

As organisations increasingly adopt AI, often handling sensitive data, Mirror Security argues that conventional security measures (like policy-based controls) fall short. Its encryption-native approach aims to provide cryptographic guarantees rather than trust-based assurances, positioning the company as a ‘trust layer’ for the emerging AI economy.

The Irish startup also announced a strategic partnership with Inception AI (a subsidiary of G42) to deploy its full AI security stack across enterprise and government systems. Mirror has also formed collaborations with major technology players including Intel, MongoDB, and others.

From a digital policy and global technology governance perspective, this funding milestone is significant. It underlines how the increasing deployment of AI, especially in enterprise and government contexts, is creating demand for robust, privacy-preserving infrastructure. Mirror Security’s model offers a potential blueprint for how to reconcile AI’s power with data confidentiality, compliance, and sovereignty.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Apple support scam targets users with real tickets

Cybercriminals are increasingly exploiting Apple’s support system to trick users into surrendering their accounts. Fraudsters open real support tickets in a victim’s name, which triggers official Apple emails and creates a false sense of legitimacy. These messages appear professional, making it difficult for users to detect the scam.

Victims often receive a flood of alerts, including two-factor authentication notifications, followed by calls from fraudsters posing as Apple agents. The scammers guide users through steps that appear to secure their accounts, often directing them to convincing fake websites that request sensitive information.

Entering verification codes or following instructions on these fraudulent pages gives attackers access to the account. Even experienced users can fall prey because the emails come from official Apple domains, and the phone calls are carefully scripted to build trust.

Experts recommend checking support tickets directly within your Apple ID account, never sharing verification codes, and reviewing all devices linked to your account. Using antivirus software, activating two-factor authentication, and limiting personal information online further strengthen protection against such sophisticated phishing attacks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australia stands firm on under-16 social media ban

Australia’s government has defended its under-16 social media ban ahead of the policy’s introduction on 10 December. Communications Minister Anika Wells said she would not be pressured by major platforms opposing the plan.

Tech companies argued that bans may prove ineffective, yet Wells maintained firms had years to address known harms. She insisted parents required stronger safeguards after repeated failures by global platforms.

Critics raised concerns about enforcement and the exclusion of online gaming despite widespread worries about Roblox. Two teenagers also launched a High Court challenge, claiming the policy violated children’s rights.

Wells accepted rollout difficulties but said wider social gains in Australia justified firm action. She added that policymakers must intervene when unsafe operating models place young people at risk.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

When AI use turns dangerous for diplomats

Diplomats are increasingly turning to tools like ChatGPT and DeepSeek to speed up drafting, translating, and summarising documents, a trend Jovan Kurbalija describes as the rise of ‘Shadow AI.’ These platforms, often used through personal accounts or consumer apps, offer speed and convenience that overstretched diplomatic services struggle to match.

But the same ease of use that makes Shadow AI attractive also creates a direct clash with diplomacy’s long-standing foundations of discretion and controlled ambiguity.

Kurbalija warns that this quiet reliance on commercial AI platforms exposes sensitive information in ways diplomats may not fully grasp. Every prompt, whether drafting talking points, translating notes, or asking for negotiation strategies, reveals assumptions, priorities, and internal positions.

Over time, this builds a detailed picture of a country’s concerns and behaviour, stored on servers outside diplomatic control and potentially accessible through foreign legal systems. The risk is not only data leakage but also the erosion of diplomatic craft, as AI-generated text encourages generic language, inflates documents, and blurs the national nuances essential to negotiation.

The problem, Kurbalija argues, is rooted in a ‘two-speed’ system. Technology evolves rapidly, while institutions adapt slowly.

Diplomatic services can take years to develop secure, in-house tools, while commercial AI is instantly available on any phone or laptop. Yet the paradox is that safe, locally controlled AI, based on open-source models, is technically feasible and financially accessible. What slows progress is not technology, but how ministries manage and value knowledge, their core institutional asset.
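To illustrate what locally controlled AI can look like in practice, the sketch below runs an open-weight model entirely offline with the llama-cpp-python library. The model path is a placeholder, and this is one possible setup rather than any ministry's actual stack.

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# A fully local inference setup: the model file sits on in-house
# hardware and no prompt or output ever leaves the machine. The path
# is hypothetical; any open-weight GGUF model would work here.
llm = Llama(model_path="/srv/models/open-model.gguf", n_ctx=2048)

prompt = "Summarise the following note in three bullet points:\n..."
result = llm(prompt, max_tokens=256, temperature=0.2)

print(result["choices"][0]["text"])
```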

Rather than relying on awareness campaigns or bans, which rarely change behaviour, Kurbalija calls for a structural shift: foreign ministries must build trustworthy, in-house AI ecosystems that keep all prompts, documents, and outputs within controlled government environments. That requires redesigning workflows, integrating AI into records management, and empowering the diplomats who have already experimented informally with these tools.

Only by moving AI from the shadows into a secure, well-governed framework, he argues, can diplomacy preserve its confidentiality, nuance, and institutional memory in the age of AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Singapore and the EU advance their digital partnership

The European Union and Singapore met in Brussels for the second Digital Partnership Council, reinforcing their joint ambition to strengthen cooperation across a broad set of digital priorities.

Both sides expressed a shared interest in improving competitiveness, expanding innovation and shaping common approaches to digital rules instead of relying on fragmented national frameworks.

Discussions covered AI, cybersecurity, online safety, data flows, digital identities, semiconductors and quantum technologies.

Officials highlighted the importance of their administrative arrangement on AI safety and explored potential future cooperation on language models, including the EU’s work on the Alliance for Language Technologies and Singapore’s SEA-LION initiative.

Efforts to protect consumers and support minors online were highlighted, alongside the potential role of age verification tools.

Further exchanges focused on trust services and the interoperability of digital identity systems, as well as collaborative research on semiconductors and quantum technologies.

Both sides emphasised the importance of robust cyber resilience and ongoing evaluation of cybersecurity risks, rather than relying on reactive measures. The recently signed Digital Trade Agreement was welcomed for improving legal certainty, building consumer trust and reducing barriers to digital commerce.

The meeting between the EU and Singapore confirmed the importance of the partnership in supporting economic security, strengthening research capacity and increasing resilience in critical technologies.

It also reflected the wider priorities outlined in the European Commission’s International Digital Strategy, which places particular emphasis on cooperation with Asian partners across emerging technologies and digital governance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Poetic prompts reveal gaps in AI safety, according to study

Researchers in Italy have found that poetic language can weaken the safety barriers used by many leading AI chatbots.

The study, carried out by Icaro Lab, part of DexAI, examined whether poems containing harmful requests could provoke unsafe answers from widely deployed models across the industry. The team wrote twenty poems in English and Italian, each ending with explicit instructions that AI systems are trained to block.

The researchers tested the poems on twenty-five models developed by nine major companies. Poetic prompts produced unsafe responses in more than half of the tests.
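The study's exact harness is not described here, but an evaluation of this shape is simple to express. The sketch below is a hypothetical reconstruction: it uses the OpenAI Python client as one example provider, and the `judge_is_unsafe` check is a deliberately crude placeholder, not the study's actual criteria.

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

BLOCKED_MARKERS = ["example-unsafe-marker"]  # placeholder criteria only

def judge_is_unsafe(reply: str) -> bool:
    # Hypothetical stand-in for the study's safety review; a real judge
    # would be far more sophisticated than a keyword check.
    return any(marker in reply.lower() for marker in BLOCKED_MARKERS)

def run_eval(poems: list[str], model: str) -> float:
    """Return the fraction of poetic prompts that elicit unsafe output."""
    unsafe = 0
    for poem in poems:
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": poem}],
        )
        if judge_is_unsafe(resp.choices[0].message.content):
            unsafe += 1
    return unsafe / len(poems)
```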

Some models appeared more resilient than others. OpenAI’s GPT-5 Nano avoided unsafe replies in every case, while Google’s Gemini 2.5 Pro generated harmful content in all tests. Two Meta systems produced unsafe responses to twenty percent of the poems.

Researchers also argue that poetic structure disrupts the predictive patterns large language models rely on to filter harmful material. The unconventional rhythm and metaphor common in poetry make the underlying safety mechanisms less reliable.

Additionally, the team warned that adversarial poetry can be used by anyone, which raises concerns about how easily safety systems may be manipulated in everyday use.

Before releasing the study, the researchers contacted all companies involved and shared the full dataset with them.

Anthropic confirmed receipt and stated that it was reviewing the findings. The work has prompted debate over how AI systems can be strengthened as creative language becomes an increasingly common method for attempting to bypass safety controls.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Coupang breach prompts scrutiny from South Korean regulators

South Korea is examining a significant data breach at Coupang after the retailer confirmed exposure of personal details linked to millions of users. Officials say the incident involves only domestic accounts. Regulators have opened a formal investigation.

Coupang first reported a small number of affected users, then revised its estimate to 33.7 million. The firm states that the leaked data includes names and contact details. It maintains that passwords and payment information remain secure.

Authorities believe the breach may date back several months and may involve an overseas server. Local media report suspicions that a former employee based in China was involved. Investigators are assessing whether data-protection rules were breached.

The incident adds to a series of cyberattacks on major firms in South Korea this year. Commentators say repeated lapses point to structural weaknesses. Previous breaches at SK Telecom and Lotte Card remain fresh in public memory.

Coupang has apologised and warned customers to watch for scams using the stolen information. Regulators pledge swift enforcement action if violations are confirmed. The case has reignited debate over corporate safeguards and national cyber resilience.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

South Korean retailer admits worst-ever data leak

Coupang disclosed a major data breach on 30 November 2025 that exposed 33.7 million customer accounts. The leaked data includes names, email addresses, phone numbers, shipping addresses and some order history but excludes payment or login credentials.

The company said it first detected unauthorised access on 18 November. Subsequent investigations revealed that attacks likely began on 24 June through overseas servers and may involve a former employee’s still-active authentication key.
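The still-active key is a textbook example of credentials outliving their owner. As a purely illustrative sketch (none of the names below reflect Coupang's systems), an audit for such orphaned keys can be as simple as cross-checking key owners against current staff:

```python
# Hypothetical audit for credentials whose owners have left.
# Records and field names are illustrative only.
access_keys = [
    {"key_id": "svc-001", "owner": "alice", "revoked": False},
    {"key_id": "svc-002", "owner": "bob", "revoked": False},
]
current_staff = {"alice"}  # bob has left the company

stale = [
    k for k in access_keys
    if not k["revoked"] and k["owner"] not in current_staff
]

for key in stale:
    # In a real system this step would call the IAM API to revoke the
    # key and open an incident ticket for review.
    print(f"Revoking orphaned key {key['key_id']} (owner: {key['owner']})")
```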

South Korean authorities launched an emergency probe to determine if Coupang violated data-protection laws. The government warned customers to stay alert to phishing and fraud attempts using the leaked information.

Cybersecurity experts say the breach may be one of the worst personal-data leaks in Korean history. Critics claim the incident underlines deep structural weaknesses in corporate cybersecurity practices.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Fraud and scam cases push FIDReC workloads to new highs

Singapore’s Financial Industry Disputes Resolution Centre (FIDReC) recorded 4,355 claims in FY2024/2025, its highest volume in twenty years and a sharp rise from the previous year. Scam activity and broader dispute growth across financial institutions contributed to the increase. Greater public awareness of the centre’s role also drove more filings.

Fraud and scam disputes climbed to 1,285 cases, up more than 50% on the previous year. FIDReC accepted 2,646 claims for handling, meaning fraud and scams made up nearly half of the accepted caseload, while early resolution procedures helped contain growth in the formal caseload. The phased approach encourages direct negotiation between consumers and providers.

Chief Executive Eunice Chua said rising claim volumes reflect fast-evolving financial risks and increasingly complex products. National indicators show similar pressures, with Singapore ranked second globally for payment card scams. Insurance fraud reports also continued to grow during the year.

Compromised credentials accounted for most scam-related cases, often involving unauthorised withdrawals or card charges. Consumers reported incidents without knowing how their details were obtained. The share of such complaints rose markedly compared with the previous year.

Banks added safeguards on large digital withdrawals as part of wider anti-scam measures. Regulators introduced cooling-off periods, stronger information sharing and closer monitoring of suspicious activity. Authorities say the goal is to limit exposure to scams and reinforce public confidence.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!