Dublin startup raises US$2.5m to protect AI data with encryption

Mirror Security, founded at University College Dublin, has announced a US$2.5 million (approx. €2.15 million) pre-seed funding round to develop what it describes as the next generation of secure AI infrastructure.

The startup’s core product, VectaX, is a fully homomorphic encryption (FHE) engine designed for AI workloads. This technology allows AI systems to process, train or infer on data that remains encrypted, meaning sensitive or proprietary data never has to be exposed in plaintext, even during computation.
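Production FHE engines such as VectaX rely on lattice-based cryptography and are far more sophisticated than any short sketch, but the core property, that arithmetic on ciphertexts mirrors arithmetic on plaintexts, can be illustrated with a toy additively homomorphic masking scheme. Everything below (the modulus, the key handling, the function names) is invented purely for illustration and bears no relation to Mirror Security's actual implementation:

```python
import secrets

# Public modulus for the toy scheme (arbitrary choice, not cryptographically meaningful here)
N = 2**61 - 1

def keygen():
    """Generate a random secret mask."""
    return secrets.randbelow(N)

def encrypt(key, m):
    """Hide plaintext m under an additive mask."""
    return (m + key) % N

def eval_add(c1, c2):
    """A server can add two ciphertexts without learning either plaintext."""
    return (c1 + c2) % N

def decrypt(combined_key, c):
    """Whoever holds the combined mask recovers the plaintext sum."""
    return (c - combined_key) % N

# The 'server' adds 20 and 22 without ever seeing them in plaintext
k1, k2 = keygen(), keygen()
c = eval_add(encrypt(k1, 20), encrypt(k2, 22))
assert decrypt((k1 + k2) % N, c) == 42
```

Real FHE schemes additionally support multiplication and do not require the decrypting party to track which keys were combined; the toy above only shows why a server can compute on values it cannot read.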

Backed by leading deep-tech investors such as Sure Valley Ventures (SVV) and Atlantic Bridge, Mirror Security plans to scale its engineering and AI-security teams across Ireland, the US and India, accelerate development of encrypted inferencing and secure fine-tuning, and target enterprise markets in the US.

As organisations increasingly adopt AI, often handling sensitive data, Mirror Security argues that conventional security measures (like policy-based controls) fall short. Its encryption-native approach aims to provide cryptographic guarantees rather than trust-based assurances, positioning the company as a ‘trust layer’ for the emerging AI economy.

The Irish startup also announced a strategic partnership with Inception AI (a subsidiary of G42) to deploy its full AI security stack across enterprise and government systems. Mirror has also formed collaborations with major technology players including Intel, MongoDB, and others.

From a digital policy and global technology governance perspective, this funding milestone is significant. It underlines how the increasing deployment of AI, especially in enterprise and government contexts, is creating demand for robust, privacy-preserving infrastructure. Mirror Security’s model offers a potential blueprint for how to reconcile AI’s power with data confidentiality, compliance, and sovereignty.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Apple support scam targets users with real tickets

Cybercriminals are increasingly exploiting Apple’s support system to trick users into surrendering their accounts. Fraudsters open real support tickets in a victim’s name, which triggers official Apple emails and creates a false sense of legitimacy. These messages appear professional, making it difficult for users to detect the scam.

Victims often receive a flood of alerts, including two-factor authentication notifications, followed by phone calls from callers posing as Apple agents. The scammers guide users through steps that appear to secure their accounts, often directing them to convincing fake websites that request sensitive information.

Entering verification codes or following instructions on these fraudulent pages gives attackers access to the account. Even experienced users can fall prey because the emails come from official Apple domains, and the phone calls are carefully scripted to build trust.

Experts recommend checking support tickets directly within your Apple ID account, never sharing verification codes, and reviewing all devices linked to your account. Using antivirus software, activating two-factor authentication, and limiting personal information online further strengthen protection against such sophisticated phishing attacks.

Australia stands firm on under 16 social media ban

Australia’s government defended its under-16 social media ban ahead of its introduction on 10 December. Minister Anika Wells said she would not be pressured by major platforms opposing the plan.

Tech companies argued that bans may prove ineffective, yet Wells maintained firms had years to address known harms. She insisted parents required stronger safeguards after repeated failures by global platforms.

Critics raised concerns about enforcement and the exclusion of online gaming despite widespread worries about Roblox. Two teenagers also launched a High Court challenge, claiming the policy violated children’s rights.

Wells accepted rollout difficulties but said wider social gains in Australia justified firm action. She added that policymakers must intervene when unsafe operating models place young people at risk.

Singapore and the EU advance their digital partnership

The European Union and Singapore met in Brussels for the second Digital Partnership Council, reinforcing their joint ambition to strengthen cooperation across a broad set of digital priorities.

Both sides expressed a shared interest in improving competitiveness, expanding innovation and shaping common approaches to digital rules instead of relying on fragmented national frameworks.

Discussions covered AI, cybersecurity, online safety, data flows, digital identities, semiconductors and quantum technologies.

Officials highlighted the importance of administrative arrangements in AI safety. They explored potential future cooperation on language models, including the EU’s work on the Alliance for Language Technologies and Singapore’s Sea-Lion initiative.

Efforts to protect consumers and support minors online were highlighted, alongside the potential role of age verification tools.

Further exchanges focused on trust services and the interoperability of digital identity systems, as well as collaborative research on semiconductors and quantum technologies.

Both sides emphasised the importance of robust cyber resilience and ongoing evaluation of cybersecurity risks, rather than relying on reactive measures. The recently signed Digital Trade Agreement was welcomed for improving legal certainty, building consumer trust and reducing barriers to digital commerce.

The meeting between the EU and Singapore confirmed the importance of the partnership in supporting economic security, strengthening research capacity and increasing resilience in critical technologies.

It also reflected the wider priorities outlined in the European Commission’s International Digital Strategy, which placed particular emphasis on cooperation with Asian partners across emerging technologies and digital governance.

Poetic prompts reveal gaps in AI safety, according to study

Researchers in Italy have found that poetic language can weaken the safety barriers used by many leading AI chatbots.

The study, conducted by Icaro Lab, part of DexAI, examined whether poems containing harmful requests could provoke unsafe answers from widely deployed models across the industry. The team wrote twenty poems in English and Italian, each ending with explicit instructions that AI systems are trained to block.

The researchers tested the poems on twenty-five models developed by nine major companies. Poetic prompts produced unsafe responses in more than half of the tests.

Some models appeared more resilient than others. OpenAI’s GPT-5 Nano avoided unsafe replies in every case, while Google’s Gemini 2.5 Pro generated harmful content in all tests. Two Meta systems produced unsafe responses to twenty percent of the poems.

Researchers also argue that poetic structure disrupts the predictive patterns large language models rely on to filter harmful material. The unconventional rhythm and metaphor common in poetry make the underlying safety mechanisms less reliable.

Additionally, the team warned that adversarial poetry can be used by anyone, which raises concerns about how easily safety systems may be manipulated in everyday use.

Before releasing the study, the researchers contacted all companies involved and shared the full dataset with them.

Anthropic confirmed receipt and stated that it was reviewing the findings. The work has prompted debate over how AI systems can be strengthened as creative language becomes an increasingly common method for attempting to bypass safety controls.

Europol backs major takedown of Cryptomixer in Switzerland

Europol has supported a coordinated action week in Zurich, where Swiss and German authorities dismantled the illegal cryptocurrency mixing service Cryptomixer.

Three servers were seized in Switzerland, together with the cryptomixer.io domain, leading to the confiscation of more than €25 million in Bitcoin and over 12 terabytes of operational data.

Cryptomixer operated on both the clear web and the dark web, enabling cybercriminals to conceal the origins of illicit funds. The platform had mixed over €1.3 billion in Bitcoin since 2016, aiding ransomware groups, dark web markets, and criminals involved in drug trafficking, weapons trafficking, and credit card fraud.

Its randomised pooling system effectively blocked the traceability of funds across the blockchain.

Mixing services, such as Cryptomixer, are used to anonymise illegal funds before moving them to exchanges or converting them into other cryptocurrencies or fiat. The takedown halts further laundering and disrupts a key tool used by organised cybercrime networks.
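Why randomised pooling breaks traceability can be shown with a deliberately simplified toy model (this is an illustration of the general technique, not Cryptomixer's actual mechanism): deposits are split into identical denominations, pooled, and paid out in shuffled order, so no individual payout can be attributed to a particular depositor.

```python
import random

def mix(deposits, denomination=1):
    """Toy mixer: split each deposit into equal-size notes, pool them,
    and pay out in shuffled order. Because every note in the pool is
    identical, the payout order carries no information about senders."""
    pool = []
    for sender, amount in deposits:
        pool.extend([denomination] * (amount // denomination))
    random.shuffle(pool)
    return pool  # payouts are unlinkable to the original deposits

payouts = mix([("alice", 3), ("bob", 2)])
assert sorted(payouts) == [1, 1, 1, 1, 1]
```

Equal denominations are the key design point: if payout amounts matched deposit amounts, an observer could trivially re-link inputs to outputs despite the shuffling.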

Europol facilitated information exchange through the Joint Cybercrime Action Taskforce and coordinated operational meetings throughout the investigation. The agency deployed cybercrime specialists on the final day to provide on-site support and forensics.

Earlier efforts included support for the 2023 takedown of Chipmixer, then the largest mixer of its kind.

Fake AI product photos spark concerns for online retailers

Chinese shoppers are increasingly using AI to create fake product photos to claim refunds, raising moral and legal concerns. The practice was highlighted during the Double 11 festival, with sellers receiving images of allegedly damaged goods.

Some buyers manipulated photos of fruit to appear mouldy or altered images of electric toothbrushes to look rusty. Clothing and ceramic product sellers also detected AI-generated inconsistencies, such as unnatural lighting, distorted edges, or visible signs of manipulation.

In some cases, requests were withdrawn after sellers asked for video evidence.

E-commerce platforms have historically favoured buyers, granting refunds even when claims seem unreasonable. In response, major platforms such as Taobao and Tmall removed the ‘refund only’ option and introduced buyer credit ratings based on purchase and refund histories.

Sellers are also increasingly turning to AI tools to verify images.

China’s AI content rules, effective from 1 September, require AI-generated material to be labelled, but detection remains difficult. Legal experts warn that using AI to claim refunds could constitute fraud, with calls for stricter enforcement to prevent abuse.

Meta criticised for AI-generated advert scams

Meta has faced criticism after numerous consumers reported being misled by companies using AI-generated adverts on Facebook and Instagram. The firms posed as UK businesses while shipping cheap goods from Asia, prompting claims that scams were ‘running rampant’ on the platforms.

Victims were persuaded by realistic adverts and AI-generated images but received poorly made clothing and jewellery. Several companies, including C’est La Vie, Mabel & Daisy, Harrison & Hayes, and Chester & Clare, were removed after investigations revealed fabricated backstories and fake shopfronts.

Consumer guides recommend vigilance, advising shoppers to check company websites and reviews and to use Trustpilot to verify legitimacy. Experts warn that overly perfect images, including AI-generated shopfronts or models, may signal fraudulent adverts.

Platforms such as Facebook and Instagram are urged to enforce stricter measures to prevent scams.

Meta stated it works with Stop Scams UK and encourages users to report suspicious adverts, while the Advertising Standards Authority continues to crack down on misleading online promotions.

Spar Switzerland expands crypto payments across its mobile app

Spar Switzerland has advanced retail crypto adoption by adding Bitcoin and over 100 other digital assets to its mobile app, with on-chain QR payments replacing third-party payment processors.

Supportive national regulations continue to make Switzerland one of the most active retail environments for crypto payments. Merchants across the country have increasingly embraced digital assets, encouraged by clear legal frameworks and a population already familiar with fintech services.

The update follows previous pilots involving the Lightning Network and Binance Pay that began in 2025. Lessons from those trials helped shape Spar’s shift towards a fully integrated on-chain payment system.

Industry analysts view the expansion as a strong signal of growing consumer demand for flexible payment options. Broader access in major retail chains often accelerates mainstream adoption and encourages users and businesses to engage more confidently with the crypto economy.

South Korea retailer admits worst-ever data leak

Coupang disclosed a major data breach on 30 November 2025 that exposed 33.7 million customer accounts. The leaked data includes names, email addresses, phone numbers, shipping addresses and some order history but excludes payment or login credentials.

The company said it first detected unauthorised access on 18 November. Subsequent investigations revealed that the attacks likely began on 24 June through overseas servers and may have involved a former employee’s still-active authentication key.

South Korean authorities launched an emergency probe to determine if Coupang violated data-protection laws. The government warned customers to stay alert to phishing and fraud attempts using the leaked information.

Cybersecurity experts say the breach may be one of the worst personal-data leaks in Korean history. Critics claim the incident underlines deep structural weaknesses in corporate cybersecurity practices.
