Dutch lawmakers have approved a new tax law that will impose a 36% levy on actual investment returns, including both realised and unrealised gains from cryptocurrencies such as Bitcoin and Ethereum.
The law, called the Actual Return in Box 3 Act, takes effect on 1 January 2028 and applies annually, meaning investors will owe tax even if assets are not sold.
Real estate and startup shares are exempt from mark-to-market taxation, while cryptocurrencies are not, raising concern among crypto investors. Critics say taxing paper gains may force investors to sell assets or consider moving to more favourable jurisdictions.
The government defended the measure as essential to prevent significant revenue losses.
The legislation includes some relief measures, such as a tax-free annual return for small savers and unlimited loss carry-forward above certain thresholds, allowing investors to offset downturns against future gains.
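The mechanics described above — an annual 36% levy on paper gains, with losses carried forward against later gains — can be sketched numerically. The sketch below is illustrative only: the portfolio values are invented, and the statutory thresholds and tax-free allowance mentioned in the law are not modelled.

```python
# Hypothetical sketch of annual mark-to-market taxation with loss
# carry-forward, as described for the Dutch Box 3 regime. The 36% rate
# comes from the reporting; the portfolio values and offset mechanics
# are illustrative assumptions, not the statutory rules.

TAX_RATE = 0.36

def annual_box3_tax(year_end_values):
    """Compute tax per year on unrealised gains, carrying losses forward."""
    taxes = []
    loss_carry_forward = 0.0
    for prev, curr in zip(year_end_values, year_end_values[1:]):
        gain = curr - prev  # paper gain: tax is owed even if nothing is sold
        if gain < 0:
            loss_carry_forward += -gain  # a down year offsets future gains
            taxes.append(0.0)
            continue
        taxable = max(gain - loss_carry_forward, 0.0)
        loss_carry_forward = max(loss_carry_forward - gain, 0.0)
        taxes.append(round(taxable * TAX_RATE, 2))
    return taxes

# A holding bought at 10,000 that rises, crashes, then recovers:
print(annual_box3_tax([10_000, 15_000, 9_000, 14_000]))
```

In this toy run, tax is due only in the first year; the crash in year two creates a carried-forward loss that fully absorbs the recovery in year three, which is the relief mechanism critics say still leaves investors paying tax on gains they may never realise.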
Despite these provisions, many crypto advocates argue that taxing unrealised gains remains problematic.
Crypto adoption in the Netherlands is growing rapidly. Indirect holdings by Dutch companies, institutions, and households reached $1.42 billion by October 2025, up from $96 million in 2020.
Officials say the long-term goal is to move towards a realised gains model, but annual taxation of paper gains is currently seen as necessary to safeguard public finances.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The European Parliament has disabled AI features on the tablets it provides to lawmakers, citing cybersecurity and data protection concerns. Built-in tools such as writing aids and virtual assistants have been switched off, while third-party apps remain largely unaffected.
The decision follows an assessment highlighting that some AI features send data to cloud services rather than processing it locally.
Lawmakers have been advised to take similar precautions on their personal devices. Guidance includes reviewing AI settings, disabling unnecessary features, and limiting app permissions to reduce exposure of work emails and documents.
Officials stressed that these measures are intended to prevent sensitive data from being inadvertently shared with service providers.
The move comes amid broader European scrutiny of reliance on overseas digital platforms, particularly US-based services. Concerns over data sovereignty and laws like the US Cloud Act have amplified fears that personal and sensitive information could be accessed by foreign authorities.
AI tools, which require extensive access to user data, have become a key focus in ongoing debates over digital security in the EU.
The European Commission has opened formal proceedings against Shein under the Digital Services Act over addictive design and illegal product risks. The move follows preliminary reviews of company reports and responses to information requests. Officials said the decision does not prejudge the outcome.
Investigators will review safeguards to prevent illegal products from being sold in the European Union, including items that could amount to child sexual abuse material, such as child-like sex dolls. Authorities will also assess how the platform detects and removes unlawful goods offered by third-party sellers.
The Commission will examine risks linked to platform design, including engagement-based rewards that may encourage excessive use. Officials will assess whether adequate measures are in place to limit potential harm to users’ well-being and ensure effective consumer protection online.
Transparency obligations under the DSA are another focal point. Platforms must clearly disclose the main parameters of their recommender systems and provide at least one easily accessible option that is not based on profiling. The Commission will assess whether Shein meets these requirements.
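The non-profiling requirement can be illustrated with a minimal sketch. This is not Shein's actual system — the item fields, the modes, and the chronological fallback are assumptions — but it shows the shape of what the DSA asks for: a personalised ranking whose main parameters are disclosed, alongside at least one ordering that uses no profiling at all.

```python
# Illustrative sketch of a DSA-style recommender with a non-profiling
# option. Field names, modes, and ranking logic are hypothetical.

def recommend(items, mode="chronological", user_profile=None):
    """Return items ordered either without profiling or personalised."""
    if mode == "chronological":
        # The non-profiling option: order by recency only, no user data.
        return sorted(items, key=lambda i: i["posted_at"], reverse=True)
    if mode == "personalised":
        # Profiling-based ranking; under the DSA its main parameters
        # (here: affinity score per category) must be clearly disclosed.
        return sorted(items,
                      key=lambda i: user_profile.get(i["category"], 0),
                      reverse=True)
    raise ValueError(f"unknown mode: {mode}")

items = [
    {"id": 1, "category": "shoes", "posted_at": 3},
    {"id": 2, "category": "bags", "posted_at": 1},
    {"id": 3, "category": "shoes", "posted_at": 2},
]
print([i["id"] for i in recommend(items)])  # recency order, no profiling
```

The point of the sketch is the default: the non-profiling ordering must be easily accessible, not buried behind the personalised feed.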
Coimisiún na Meán, Ireland's Digital Services Coordinator, will assist the investigation, as Shein's EU base is in Ireland. The Commission may seek more information or adopt interim measures if needed. The proceedings run alongside consumer protection action and product safety enforcement.
Concerns over privacy safeguards have resurfaced as the European Data Protection Supervisor urges legislators to limit indiscriminate chat-scanning in the upcoming extension of temporary EU rules.
The supervisor warns that the current framework risks enabling broad surveillance instead of focusing on targeted action against criminal content.
The EU institutions are considering a short-term renewal of the interim regime that allows providers to detect online child sexual abuse material.
Privacy officials argue that such measures need clearer boundaries and stronger oversight to ensure that automated scanning tools do not intrude on the communications of ordinary users.
The EDPS is also pressing lawmakers to introduce explicit safeguards before any renewal is approved. These include tighter definitions of scanning methods, independent verification, and mechanisms that prevent the processing of unrelated personal data.
According to the supervisor, temporary legislation must not create long-term precedents that weaken confidentiality across messaging services.
The debate comes as the EU continues discussions on a wider regulatory package covering child-protection technologies, encryption and platform responsibilities.
Privacy authorities maintain that targeted tools can be more practical than blanket scanning, which they consider a disproportionate response.
New research challenges the view that China’s AI controls are solely the product of authoritarian rule, arguing instead that governance emerges from interaction between the state, private sector and society.
A study by Xuechen Chen of Northeastern University London and Lu Xu of Lancaster University argues that China’s AI governance is not purely top-down. Published in the Computer Law & Security Review, it says safeguards are shaped by regulators, companies and social actors, not only the central government.
Chen calls claims that Beijing’s AI oversight is entirely state-driven a ‘stereotypical narrative’. Although the Cyberspace Administration of China leads regulation, firms such as ByteDance and DeepSeek help shape guardrails through self-regulation and commercial strategy.
China was the first country to introduce rules specific to generative AI. Systems must avoid unlawful or vulgar content, and updated legislation strengthens protections for minors, limiting children’s online activity and requiring child-friendly device modes.
Market incentives also reinforce compliance. As Chinese AI firms expand globally, consumer expectations and cultural norms encourage content moderation. The study concludes that governance reflects interaction between state authority, market forces and society.
Quebec’s financial regulator has opened a review into how AI tools are being used to collect consumer debt across the province. The Autorité des marchés financiers is examining whether automated systems comply with governance, privacy and fairness standards in Quebec.
Draft guidelines released in 2025 require institutions in Quebec to maintain registries of AI systems, conduct bias testing and ensure human oversight. Public consultations closed in November, with regulators stressing that automation must remain explainable and accountable.
Many debt collection platforms now rely on predictive analytics to tailor the timing, tone and frequency of messages sent to borrowers in Quebec. Regulators are assessing whether such personalisation risks undue pressure or opaque decision-making.
BBC technology reporting reveals that Orchids, a popular ‘vibe-coding’ platform that lets users build applications through simple text prompts and AI-assisted generation, contains serious, unresolved security weaknesses that could allow a malicious actor to breach accounts and tamper with code or data.
A cybersecurity researcher demonstrated that the platform’s authentication and input handling mechanisms can be exploited, allowing unauthorised access to projects and potentially enabling attackers to insert malicious code or exfiltrate sensitive information.
Because Orchids abstracts conventional coding into natural-language prompts and shared project spaces, the risk surface for such vulnerabilities is larger than in traditional development environments.
The report underscores broader concerns in the AI developer ecosystem: as AI-driven tools lower technical barriers, they also bring new security challenges when platforms rush to innovate without fully addressing fundamental safeguards such as secure authentication, input validation and permission controls.
Experts cited in the article urge industry and regulators to prioritise robust security testing and clear accountability when deploying AI-assisted coding systems.
Artificial intelligence startup Simile has raised $100 million to develop a model designed to predict human behaviour in commercial and corporate contexts. The funding round was led by Index Ventures with participation from Bain Capital Ventures and other investors.
The company is building a foundation model trained on interviews, transaction records and behavioural science research. Its AI simulations aim to forecast customer purchases and anticipate questions analysts may raise during earnings calls.
Simile says the technology could offer an alternative to traditional focus groups and market testing. Retail trials have included using the system to guide decisions on product placement and inventory.
Founded by Stanford-affiliated researchers, the startup recently emerged from stealth after months of development. Prominent AI figures, including Fei-Fei Li and Andrej Karpathy, joined the funding round as it seeks to scale predictive decision-making tools.
Growing numbers of students are using AI chatbots such as ChatGPT to guide their college search, reshaping how institutions attract applicants. Surveys show nearly half of high school students now use artificial intelligence tools during the admissions process.
Unlike traditional search engines, generative AI provides direct answers rather than website links, keeping users within conversational platforms. That shift has prompted universities to focus on ‘AI visibility’, ensuring their information is accurately surfaced by chatbots.
Institutions are refining website content through answer engine optimisation to improve how AI systems interpret their programmes and values. Clear, updated data is essential, as generative models can produce errors or outdated responses.
College leaders see both opportunity and risk in the trend. While AI can help families navigate complex choices, advisers warn that trust, accuracy and the human element remain critical in higher education decision-making.
Portugal’s parliament has approved a draft law that would require parental consent for teenagers aged 13 to 16 to use social media, in a move aimed at strengthening online protections for minors. The proposal passed its first reading on Thursday and will now move forward in the legislative process, where it could still be amended before a final vote.
The bill is backed by the ruling Social Democratic Party (PSD), which argues that stricter rules are needed to shield young people from online risks. Lawmakers cited concerns over cyberbullying, exposure to harmful content, and contact with online predators as key reasons for tightening access.
Under the proposal, parents would have to grant permission through Portugal’s public Digital Mobile Key system. Social media companies would be required to introduce age-verification mechanisms linked to this system to ensure that only authorised teenagers can create and maintain accounts.
The legislation also seeks to reinforce the enforcement of an existing ban prohibiting children under 13 from accessing social media platforms. Authorities believe the new measures would make it harder for younger users to bypass age limits.
The draft law was approved in its first reading by 148 votes to 69, with 13 abstentions. A PSD lawmaker warned that companies failing to comply could face fines of up to 2% of their global revenue, signalling that the government intends to enforce the rules seriously.