Social media ban for children gains momentum in Germany

Germany’s coalition government is weighing new restrictions on children’s access to social media as both governing parties draft proposals to tighten online safeguards. The debate comes amid broader economic pressures, with industry reporting significant job losses last year.

The conservative bloc and the centre-left Social Democrats are examining measures that could curb or block social media access for minors. Proposals under discussion include age-based restrictions and stronger platform accountability.

The Social Democrats have proposed banning access for children under 14 and introducing dedicated youth versions of platforms for users aged 14 to 16. Supporters argue that clearer age thresholds could reduce exposure to harmful content and addictive design features.

The discussions align with a growing European trend toward stricter digital child protection rules. Several governments are exploring tougher age verification and content moderation standards, reflecting mounting concerns over online safety and mental health.

The policy debate unfolded as German industry reported cutting 124,100 jobs in 2025 amid ongoing economic headwinds. Lawmakers face the dual challenge of safeguarding younger users while navigating wider structural pressures affecting Europe’s largest economy.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Government ramps up online safety for children in the UK

The UK government has announced new measures to protect children online, giving parents clearer guidance and support. Prime Minister Keir Starmer said no platform will get a free pass, with illegal AI chatbot content to be targeted immediately.

New powers, to be introduced through upcoming legislation, will allow swift action following a consultation on children’s digital well-being.

Proposed measures include enforcing social media age limits, restricting harmful features like infinite scrolling, and strengthening safeguards against sharing non-consensual intimate images.

Ministers are already consulting parents, children, and civil society groups. The Department for Science, Innovation and Technology launched the ‘You Won’t Know until You Ask’ campaign to advise parents on safety settings, talking to children, and handling harmful content.

Charities such as the NSPCC and the Molly Rose Foundation welcomed the announcement, emphasising swift action on age limits, addictive design, and AI content regulation. Children’s feedback will help shape the new rules, which aim to make the UK a global leader in online safety.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Bitcoin and Ethereum gains face new crypto tax under Dutch law

Dutch lawmakers have approved a new tax law that will impose a 36% levy on actual investment returns, including both realised and unrealised gains from cryptocurrencies such as Bitcoin and Ethereum.

The law, called the Actual Return in Box 3 Act, takes effect on 1 January 2028 and applies annually, meaning investors will owe tax even if assets are not sold.

Real estate and startup shares are exempt from mark-to-market taxation, while cryptocurrencies are not, raising concern among crypto investors. Critics say taxing paper gains may force investors to sell assets or consider moving to more favourable jurisdictions.

The government defended the measure as essential to prevent significant revenue losses.

The legislation includes some relief measures, such as a tax-free annual return for small savers and unlimited carry-forward of losses above certain thresholds, allowing investors to offset downturns against future gains.

Despite these provisions, many crypto advocates argue that taxing unrealised gains remains problematic.

Crypto adoption in the Netherlands is growing rapidly. Indirect holdings by Dutch companies, institutions, and households reached $1.42 billion by October 2025, up from $96 million in 2020.

Officials say the long-term goal is to move towards a realised gains model, but annual taxation of paper gains is currently seen as necessary to safeguard public finances.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI features disabled on MEP tablets amid European Parliament security concerns

The European Parliament has disabled AI features on the tablets it provides to lawmakers, citing cybersecurity and data protection concerns. Built-in tools such as writing aids and virtual assistants are affected, while third-party apps remain mostly unaffected.

The decision follows an assessment highlighting that some AI features send data to cloud services rather than processing it locally.

Lawmakers have been advised to take similar precautions on their personal devices. Guidance includes reviewing AI settings, disabling unnecessary features, and limiting app permissions to reduce exposure of work emails and documents.

Officials stressed that these measures are intended to prevent sensitive data from being inadvertently shared with service providers.

The move comes amid broader European scrutiny of reliance on overseas digital platforms, particularly US-based services. Concerns over data sovereignty and laws like the US Cloud Act have amplified fears that personal and sensitive information could be accessed by foreign authorities.

AI tools, which require extensive access to user data, have become a key focus in ongoing debates over digital security in the EU.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Shein faces formal proceedings under EU Digital Services Act

The European Commission has opened formal proceedings against Shein under the Digital Services Act over addictive design and illegal product risks. The move follows preliminary reviews of company reports and responses to information requests. Officials said the decision does not prejudge the outcome.

Investigators will review safeguards to prevent illegal products being sold in the European Union, including items that could amount to child sexual abuse material, such as child-like sex dolls. Authorities will also assess how the platform detects and removes unlawful goods offered by third-party sellers.

The Commission will examine risks linked to platform design, including engagement-based rewards that may encourage excessive use. Officials will assess whether adequate measures are in place to limit potential harm to users’ well-being and ensure effective consumer protection online.

Transparency obligations under the DSA are another focal point. Platforms must clearly disclose the main parameters of their recommender systems and provide at least one easily accessible option that is not based on profiling. The Commission will assess whether Shein meets these requirements.

Coimisiún na Meán, the Digital Services Coordinator of Ireland, will assist the investigation as Ireland is Shein’s EU base. The Commission may seek more information or adopt interim measures if needed. Proceedings run alongside consumer protection action and product safety enforcement.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EDPS urges stronger safeguards in EU temporary chat-scanning rules

Concerns over privacy safeguards have resurfaced as the European Data Protection Supervisor urges legislators to limit indiscriminate chat-scanning in the upcoming extension of temporary EU rules.

The supervisor warns that the current framework risks enabling broad surveillance instead of focusing on targeted action against criminal content.

The EU institutions are considering a short-term renewal of the interim regime governing the detection of online material linked to child protection.

Privacy officials argue that such measures need clearer boundaries and stronger oversight to ensure that automated scanning tools do not intrude on the communications of ordinary users.

EDPS is also pressing lawmakers to introduce explicit safeguards before any renewal is approved. These include tighter definitions of scanning methods, independent verification, and mechanisms that prevent the processing of unrelated personal data.

According to the supervisor, temporary legislation must not create long-term precedents that weaken confidentiality across messaging services.

The debate comes as the EU continues discussions on a wider regulatory package covering child-protection technologies, encryption and platform responsibilities.

Privacy authorities maintain that targeted tools can be more practical than blanket scanning, which they consider a disproportionate response.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Study says China AI governance not purely state-driven

New research challenges the view that China’s AI controls are solely the product of authoritarian rule, arguing instead that governance emerges from interaction between the state, private sector and society.

A study by Xuechen Chen of Northeastern University London and Lu Xu of Lancaster University argues that China’s AI governance is not purely top-down. Published in the Computer Law & Security Review, it says safeguards are shaped by regulators, companies and social actors, not only the central government.

Chen calls claims that Beijing’s AI oversight is entirely state-driven a ‘stereotypical narrative’. Although the Cyberspace Administration of China leads regulation, firms such as ByteDance and DeepSeek help shape guardrails through self-regulation and commercial strategy.

China was the first country to introduce rules specific to generative AI. Systems must avoid unlawful or vulgar content, and updated legislation strengthens minor protection, limiting children’s online activity and requiring child-friendly device modes.

Market incentives also reinforce compliance. As Chinese AI firms expand globally, consumer expectations and cultural norms encourage content moderation. The study concludes that governance reflects interaction between state authority, market forces and society.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Quebec examines AI debt collection practices

Quebec’s financial regulator has opened a review into how AI tools are being used to collect consumer debt across the province. The Autorité des marchés financiers is examining whether automated systems comply with governance, privacy and fairness standards in Quebec.

Draft guidelines released in 2025 require institutions in Quebec to maintain registries of AI systems, conduct bias testing and ensure human oversight. Public consultations closed in November, with regulators stressing that automation must remain explainable and accountable.

Many debt collection platforms now rely on predictive analytics to tailor the timing, tone and frequency of messages sent to borrowers in Quebec. Regulators are assessing whether such personalisation risks undue pressure or opaque decision making.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Security flaws expose ‘vibe-coding’ AI platform Orchids to easy hacking

BBC technology reporting reveals serious, unresolved security weaknesses in Orchids, a popular ‘vibe-coding’ platform that lets users build applications through simple text prompts and AI-assisted generation. The flaws could let a malicious actor breach accounts and tamper with code or data.

A cybersecurity researcher demonstrated that the platform’s authentication and input handling mechanisms can be exploited, allowing unauthorised access to projects and potentially enabling attackers to insert malicious code or exfiltrate sensitive information.

Because Orchids abstracts conventional coding into natural-language prompts and shared project spaces, the risk surface for such vulnerabilities is larger than in traditional development environments.

The report underscores broader concerns in the AI developer ecosystem: as AI-driven tools lower technical barriers, they also bring new security challenges when platforms rush to innovate without fully addressing fundamental safeguards such as secure authentication, input validation and permission controls.

Experts cited in the article urge industry and regulators to prioritise robust security testing and clear accountability when deploying AI-assisted coding systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI startup raises $100m to predict human behaviour

Artificial intelligence startup Simile has raised $100m to develop a model designed to predict human behaviour in commercial and corporate contexts. The funding round was led by Index Ventures with participation from Bain Capital Ventures and other investors.

The company is building a foundation model trained on interviews, transaction records and behavioural science research. Its AI simulations aim to forecast customer purchases and anticipate questions analysts may raise during earnings calls.

Simile says the technology could offer an alternative to traditional focus groups and market testing. Retail trials have included using the system to guide decisions on product placement and inventory.

Founded by Stanford-affiliated researchers, the startup recently emerged from stealth after months of development. Prominent AI figures, including Fei-Fei Li and Andrej Karpathy, joined the funding round as the company seeks to scale predictive decision-making tools.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!