Fraud and scam cases push FIDReC workloads to new highs

FIDReC recorded 4,355 claims in FY2024/2025, marking its highest volume in twenty years and a sharp rise from the previous year. Scam activity and broader dispute growth across financial institutions contributed to the increase. Greater public awareness of the centre’s role also drove more filings.

Fraud and scam disputes climbed to 1,285 cases, up more than 50% and accounting for nearly half of all claims. FIDReC accepted 2,646 claims for handling, with early resolution procedures reducing formal caseload growth. The phased approach encourages direct negotiation between consumers and providers.

Chief Executive Eunice Chua said rising claim volumes reflect fast-evolving financial risks and increasingly complex products. National indicators show similar pressures, with Singapore ranked second globally for payment card scams. Insurance fraud reports also continued to grow during the year.

Compromised credentials accounted for most scam-related cases, often involving unauthorised withdrawals or card charges. Consumers reported incidents without knowing how their details were obtained. The share of such complaints rose markedly compared with the previous year.

Banks added safeguards on large digital withdrawals as part of wider anti-scam measures. Regulators introduced cooling-off periods, stronger information sharing and closer monitoring of suspicious activity. Authorities say the goal is to limit exposure to scams and reinforce public confidence.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK to require crypto traders to report details from 2026

The UK government has confirmed that cryptocurrency traders will be required to report personal details to trading platforms from 1 January 2026. The move forms part of the Cryptoasset Reporting Framework (CARF), aligned with an OECD agreement, and aims to improve compliance with existing tax rules.

Under the framework, exchanges must provide HM Revenue & Customs (HMRC) with customer information, including cryptocurrency transactions and tax reference numbers.

Traders who fail to supply required details could face fines of up to £300, while platforms may be fined the same amount per unreported customer. HMRC expects to raise up to £315 million by 2030 from the new reporting rules.

Experts warn exchanges may face challenges collecting accurate information, potentially passing compliance costs on to users. Some investors may initially turn to non-compliant platforms, but international standards are expected to drive global alignment over time.

The 2025 Budget also addressed the taxation of DeFi activities such as lending and staking. HMRC appears to favour taxing gains only when they are realised, although no final decision has been made and consultations with stakeholders will continue.

DeepSeek opens access to gold-level maths AI

Chinese AI firm DeepSeek has released the first open AI model capable of achieving gold-medal results at the International Mathematical Olympiad. DeepSeekMath-V2 is now freely available on Hugging Face and GitHub, allowing developers to repurpose it and run it locally.

Gold-level performance at the IMO is remarkably rare, with only a small share of human participants reaching the top tier. DeepSeek aims to make such advanced mathematical capabilities accessible to researchers and developers who previously lacked access to comparable systems.

The company said its model achieved gold-level scores in both this year’s Olympiad and the Chinese Mathematical Olympiad. The results relied on strong theorem-proving skills and a new ‘self-verification’ method for reasoning without known solutions.

Observers said the open release could lower barriers to advanced maths AI, while US firms keep their Olympiad-level systems restricted. Supporters of open-source development welcomed the move as a significant step toward democratising advanced scientific tools.

EU members raise concerns over the Digital Networks Act

Six EU member states have urged the Union to reconsider the direction of the Digital Networks Act, calling for greater room for national decision-making.

Their joint position emphasised the wish to retain authority over frequency management and questioned proposals that could expand telecom rules into the digital services sector.

The intervention followed earlier debates at the ministerial level, where governments signalled reluctance to introduce new interconnection measures and stressed the need to consider the specific roles of different actors across the value chain rather than applying a single regulatory model to all.

Consumer groups and business organisations voiced further doubts as plans for network fees resurfaced in recent discussions. They argued that earlier consultations had already shown major risks for competition, innovation, and net neutrality, making renewed consideration unnecessary.

The US–EU trade agreement added another layer by including a clause that commits the EU to avoid such fees, leaving open how the Commission will balance domestic expectations with international obligations.

The Digital Networks Act faced an additional setback when the EU’s Regulatory Scrutiny Board delivered a negative opinion about its preparedness. That view disrupted earlier hopes of releasing a draft before the end of the year.

Even so, the Commission is expected to present an updated proposal in January 2026, setting the stage for one of the most difficult legislative debates of the coming year.

Concerns grow over WhatsApp rules as Italy probes Meta AI practices

Italy’s competition authority has launched an investigation into Meta over potential dominance in AI chatbots. Regulators are reviewing the new WhatsApp Business terms and upcoming Meta AI features. They say the changes could restrict rivals’ access to the platform.

Officials in Italy warn that the revised conditions may limit innovation and reduce consumer choice in emerging AI services. The concerns fall under Article 102 TFEU. The authority states that early action may be necessary to prevent distortions.

The case expands an existing Italian investigation into Meta and its regional subsidiaries. Regulators say technical integration of Meta AI could strengthen exclusionary effects. They argue that WhatsApp’s scale gives Meta significant structural advantages.

Low switching rates among users may entrench Meta’s market position further in Italy and beyond. Officials say rival chatbot providers would struggle to compete if access is constrained. They warn that competition could be permanently harmed.

Meta has announced significant new AI investments in the United States. Italian regulators say this reflects the sector’s growing influence. They argue that strong oversight is needed to ensure fair access to key platforms.

Australia moves to curb nudify tools after eSafety action

A major provider of three widely used nudify services has cut off Australian access after enforcement action from eSafety.

The company received an official warning in September for allowing its tools to be used to produce AI-generated material that harmed children.

The withdrawal follows concerns about incidents involving school students and repeated reminders that online services must meet Australia’s mandatory safety standards.

eSafety stated that Australia’s codes and standards are encouraging companies to adopt stronger safeguards.

The Commissioner noted that preventing the misuse of consumer tools remains central to reducing the risk of harm and that more precise boundaries can lower the likelihood of abuse affecting young people.

Attention has also turned to underlying models and the hosting platforms that distribute them.

Hugging Face has updated its terms to require users to take steps to mitigate the risks associated with uploaded models, including preventing misuse for generating harmful content. The platform itself is also required to act when reports or internal checks reveal breaches of its policies.

eSafety indicated that failure to comply with industry codes or standards can lead to enforcement measures, including significant financial penalties.

The agency is working with the government on further reforms intended to restrict access to nudify tools and strengthen protections across the technology stack.

EU prepares tougher oversight for crypto operators

EU regulators are preparing for a significant shift in crypto oversight as new rules take effect on 1 January 2026. Crypto providers must report all customer transactions and holdings in a uniform digital format, giving tax authorities broader visibility across the bloc.

The DAC8 framework brings mandatory cross-border data sharing, a centralised operator register and unique ID numbers for each reporting entity. These measures aim to streamline supervision and enhance transparency; data on delisted firms must also be preserved for up to twelve months.

Privacy concerns are rising as the new rules expand the travel rule for transfers above €1,000 and introduce possible ownership checks on private wallets. Combined with MiCA and upcoming AML rules, regulators gain deeper insight into user behaviour, wallet flows and platform operations.

Plans for ESMA to oversee major exchanges are facing pushback from smaller financial hubs, which are concerned about higher compliance costs and reduced competitiveness. Supporters argue that unified supervision is necessary to prevent regulatory gaps and reinforce market integrity across the EU.

Virginia sets new limits on AI chatbots for minors

Lawmakers in Virginia are preparing fresh efforts to regulate AI as concerns grow over its influence on minors and vulnerable users.

Legislators will return in January with a set of proposals focused on limiting the capabilities of chatbots, curbing deepfakes and restricting automated ticket-buying systems. The push follows a series of failed attempts last year to define high-risk AI systems and expand protections for consumers.

Delegate Michelle Maldonado aims to introduce measures restricting what conversational agents can say in therapeutic interactions, preventing them from mimicking emotional support.

Her plans follow the well-publicised case of a sixteen-year-old who discussed suicidal thoughts with a chatbot before taking his own life. She argues that young people rely heavily on these tools and need stronger safeguards that recognise dangerous language and redirect users towards human help.

Maldonado will also revive a previous bill on high-risk AI, refining it to address particular sectors rather than broad categories.

Delegate Cliff Hayes is preparing legislation to require labels for synthetic media and to block AI systems from buying event tickets in bulk, preventing automated tools from distorting prices.

Hayes already secured a law preventing predictions from AI tools from being the sole basis for criminal justice decisions. He warns that the technology has advanced too quickly for policy to remain passive and urges a balance between innovation and protection.

The proposals come as the state continues to evaluate its regulatory environment under an executive order issued by Governor Glenn Youngkin.

The order directs AI systems to scan the state code for unnecessary or conflicting rules, encouraging streamlined governance instead of strict statutory frameworks. Observers argue that human oversight remains essential as legislators search for common ground on how far to extend regulatory control.

Australia strengthens parent support for new social media age rules

Yesterday, Australia entered a new phase of its online safety framework with the introduction of the Social Media Minimum Age policy.

eSafety has established a new Parent Advisory Group to support families as the country transitions to enhanced safeguards for young people. The group held its first meeting, with the Commissioner underlining the need for practical and accessible guidance for carers.

The initiative brings together twelve organisations representing a broad cross-section of communities in Australia, including First Nations families, culturally diverse groups, parents of children with disability and households in regional areas.

Their role is to help eSafety refine its approach, so parents can navigate social platforms with greater confidence, rather than feeling unsupported during rapid regulatory change.

The group will advise on parent engagement, offer evidence-informed insights and test updated resources such as the redeveloped Online Safety Parent Guide.

Their advice will aim to ensure materials remain relevant, inclusive and able to reach priority communities that often miss out on official communications.

Members will serve voluntarily until June 2026 and will work with eSafety to improve distribution networks and strengthen the national conversation on digital literacy. Their collective expertise is expected to shape guidance that reflects real family experiences instead of abstract policy expectations.

INQUBATOR set to build a competitive quantum ecosystem over four years

Germany has launched the INQUBATOR initiative to help companies, particularly SMEs, prepare for the industrial impact of quantum computing. The four-year programme offers structured support to firms facing high entry barriers and limited access to advanced technologies.

A central feature is affordable access to quantum systems from multiple vendors, paired with workshops and hands-on training. Companies can test algorithms, assess business relevance and adapt processes without investing in costly hardware or specialist infrastructure.

The project is coordinated by the Fraunhofer Institute for Applied Solid-State Physics and is funded by the Federal Ministry of Research and Technology. It brings together several Fraunhofer institutes to guide firms from early exploration to applied solutions.

Initial pilot projects span medicine, cybersecurity, insurance and automotive sectors. These examples are intended to demonstrate measurable advantages and will be followed by an open call for further use cases across a broader range of industries.

INQUBATOR aims to reduce financial and technical obstacles while expanding quantum expertise and industrial readiness in Germany. By enabling practical experimentation, it seeks to build a competitive ecosystem of quantum-literate companies over the next four years.
