The European Commission has opened formal proceedings against Shein under the Digital Services Act over addictive design and illegal product risks. The move follows preliminary reviews of company reports and responses to information requests. Officials said the decision does not prejudge the outcome.
Investigators will review safeguards to prevent illegal products from being sold in the European Union, including items that could amount to child sexual abuse material, such as child-like sex dolls. Authorities will also assess how the platform detects and removes unlawful goods offered by third-party sellers.
The Commission will examine risks linked to platform design, including engagement-based rewards that may encourage excessive use. Officials will assess whether adequate measures are in place to limit potential harm to users’ well-being and ensure effective consumer protection online.
Transparency obligations under the DSA are another focal point. Platforms must clearly disclose the main parameters of their recommender systems and provide at least one easily accessible option that is not based on profiling. The Commission will assess whether Shein meets these requirements.
Coimisiún na Meán, Ireland’s Digital Services Coordinator, will assist the investigation, as Shein’s EU establishment is in Ireland. The Commission may seek more information or adopt interim measures if needed. The proceedings run alongside consumer protection action and product safety enforcement.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Concerns over privacy safeguards have resurfaced as the European Data Protection Supervisor urges legislators to limit indiscriminate chat-scanning in the upcoming extension of temporary EU rules.
The supervisor warns that the current framework risks enabling broad surveillance instead of focusing on targeted action against criminal content.
The EU institutions are considering a short-term renewal of the interim regime governing the detection of online material linked to child protection.
Privacy officials argue that such measures need clearer boundaries and stronger oversight to ensure that automated scanning tools do not intrude on the communications of ordinary users.
The EDPS is also pressing lawmakers to introduce explicit safeguards before any renewal is approved. These include tighter definitions of scanning methods, independent verification, and mechanisms that prevent the processing of unrelated personal data.
According to the supervisor, temporary legislation must not create long-term precedents that weaken confidentiality across messaging services.
The debate comes as the EU continues discussions on a wider regulatory package covering child-protection technologies, encryption and platform responsibilities.
Privacy authorities maintain that targeted tools can be more effective than blanket scanning, which they consider a disproportionate response.
New research challenges the view that China’s AI controls are solely the product of authoritarian rule, arguing instead that governance emerges from interaction between the state, private sector and society.
A study by Xuechen Chen of Northeastern University London and Lu Xu of Lancaster University argues that China’s AI governance is not purely top-down. Published in the Computer Law & Security Review, it says safeguards are shaped by regulators, companies and social actors, not only the central government.
Chen calls claims that Beijing’s AI oversight is entirely state-driven a ‘stereotypical narrative’. Although the Cyberspace Administration of China leads regulation, firms such as ByteDance and DeepSeek help shape guardrails through self-regulation and commercial strategy.
China was the first country to introduce rules specific to generative AI. Systems must avoid unlawful or vulgar content, and updated legislation strengthens protections for minors, limiting children’s online activity and requiring child-friendly device modes.
Market incentives also reinforce compliance. As Chinese AI firms expand globally, consumer expectations and cultural norms encourage content moderation. The study concludes that governance reflects interaction between state authority, market forces and society.
Quebec’s financial regulator has opened a review into how AI tools are being used to collect consumer debt across the province. The Autorité des marchés financiers is examining whether automated systems comply with governance, privacy and fairness standards in Quebec.
Draft guidelines released in 2025 require institutions in Quebec to maintain registries of AI systems, conduct bias testing and ensure human oversight. Public consultations closed in November, with regulators stressing that automation must remain explainable and accountable.
Many debt collection platforms now rely on predictive analytics to tailor the timing, tone and frequency of messages sent to borrowers in Quebec. Regulators are assessing whether such personalisation risks undue pressure or opaque decision making.
BBC technology reporting reveals that Orchids, a popular ‘vibe-coding’ platform designed to let users build applications through simple text prompts and AI-assisted generation, contains serious, unresolved security weaknesses that could let a malicious actor breach accounts and tamper with code or data.
A cybersecurity researcher demonstrated that the platform’s authentication and input handling mechanisms can be exploited, allowing unauthorised access to projects and potentially enabling attackers to insert malicious code or exfiltrate sensitive information.
Because Orchids abstracts conventional coding into natural-language prompts and shared project spaces, the risk surface for such vulnerabilities is larger than in traditional development environments.
The report underscores broader concerns in the AI developer ecosystem: as AI-driven tools lower technical barriers, they also bring new security challenges when platforms rush to innovate without fully addressing fundamental safeguards such as secure authentication, input validation and permission controls.
Experts cited in the article urge industry and regulators to prioritise robust security testing and clear accountability when deploying AI-assisted coding systems.
Artificial intelligence startup Simile has raised $100m to develop a model designed to predict human behaviour in commercial and corporate contexts. The funding round was led by Index Ventures with participation from Bain Capital Ventures and other investors.
The company is building a foundation model trained on interviews, transaction records and behavioural science research. Its AI simulations aim to forecast customer purchases and anticipate questions analysts may raise during earnings calls.
Simile says the technology could offer an alternative to traditional focus groups and market testing. Retail trials have included using the system to guide decisions on product placement and inventory.
Founded by Stanford-affiliated researchers, the startup recently emerged from stealth after months of development. Prominent AI figures, including Fei-Fei Li and Andrej Karpathy, joined the funding round as the company seeks to scale its predictive decision-making tools.
Growing numbers of students are using AI chatbots such as ChatGPT to guide their college search, reshaping how institutions attract applicants. Surveys show nearly half of high school students now use artificial intelligence tools during the admissions process.
Unlike traditional search engines, generative AI provides direct answers rather than website links, keeping users within conversational platforms. That shift has prompted universities to focus on ‘AI visibility’, ensuring their information is accurately surfaced by chatbots.
Institutions are refining website content through answer engine optimisation to improve how AI systems interpret their programmes and values. Clear, updated data is essential, as generative models can produce errors or outdated responses.
College leaders see both opportunity and risk in the trend. While AI can help families navigate complex choices, advisers warn that trust, accuracy and the human element remain critical in higher education decision-making.
Portugal’s parliament has approved a draft law that would require parental consent for teenagers aged 13 to 16 to use social media, in a move aimed at strengthening online protections for minors. The proposal passed its first reading on Thursday and will now move forward in the legislative process, where it could still be amended before a final vote.
The bill is backed by the ruling Social Democratic Party (PSD), which argues that stricter rules are needed to shield young people from online risks. Lawmakers cited concerns over cyberbullying, exposure to harmful content, and contact with online predators as key reasons for tightening access.
Under the proposal, parents would have to grant permission through Portugal’s public Digital Mobile Key system. Social media companies would be required to introduce age verification mechanisms linked to this system to ensure that only authorised teenagers can create and maintain accounts.
The legislation also seeks to reinforce the enforcement of an existing ban prohibiting children under 13 from accessing social media platforms. Authorities believe the new measures would make it harder for younger users to bypass age limits.
The draft law was approved in its first reading by 148 votes to 69, with 13 abstentions. A PSD lawmaker warned that companies failing to comply could face fines of up to 2% of their global revenue, signalling that the government intends to take enforcement seriously.
Mortgage lenders face growing pressure to govern AI as regulatory uncertainty persists across the United States. States and federal authorities continue to contest oversight, but accountability for how AI is used in underwriting, servicing, marketing, and fraud detection already rests with lenders.
Effective AI risk management requires more than policy statements. Mortgage lenders need operational governance that inventories AI tools, documents training data, and assigns accountability for outcomes, including bias monitoring and escalation when AI affects borrower eligibility, pricing, or disclosures.
Vendor risk has become a central exposure. Many technology contracts predate AI scrutiny and lack provisions on audit rights, explainability, and data controls, leaving lenders responsible when third-party models fail regulatory tests or transparency expectations.
Leading US mortgage lenders are using staged deployments, starting with lower-risk use cases such as document processing and fraud detection, while maintaining human oversight for high-impact decisions. Incremental rollouts generate performance and fairness evidence that regulators increasingly expect.
Regulatory pressure is rising as states advance AI rules and federal authorities signal the development of national standards. Even as boundaries are debated, lenders remain accountable, making early governance and disciplined scaling essential.
Officials in Russia have confirmed that no plans are underway to restrict access to Google, despite recent public debate about the possibility of a technical block. Anton Gorelkin, a senior lawmaker, said regulators clarified that such a step is not being considered.
Concerns centre on the impact a ban would have on devices running Android, which are used by a significant share of smartphone owners in the country.
Officials argue that a block on Google would disrupt essential digital services rather than encourage the company to resolve ongoing legal disputes over unpaid fines.
Gorelkin noted that court proceedings abroad are still in progress, meaning enforcement options remain open. He added that any future move to reduce reliance on Google services should follow a gradual pathway supported by domestic technological development rather than abrupt restrictions.
The comments follow earlier statements from another lawmaker, Andrey Svintsov, who acknowledged that blocking Google in Russia is technically feasible but unnecessary.
Officials now appear focused on creating conditions that would allow local digital platforms to grow without destabilising existing infrastructure.