New research challenges the view that China’s AI controls are solely the product of authoritarian rule, arguing instead that governance emerges from interaction between the state, private sector and society.
A study by Xuechen Chen of Northeastern University London and Lu Xu of Lancaster University argues that China’s AI governance is not purely top-down. Published in the Computer Law & Security Review, it says safeguards are shaped by regulators, companies and social actors, not only the central government.
Chen calls claims that Beijing’s AI oversight is entirely state-driven a ‘stereotypical narrative’. Although the Cyberspace Administration of China leads regulation, firms such as ByteDance and DeepSeek help shape guardrails through self-regulation and commercial strategy.
China was the first country to introduce rules specific to generative AI. Systems must avoid unlawful or vulgar content, and updated legislation strengthens protections for minors, limiting children’s online activity and requiring child-friendly device modes.
Market incentives also reinforce compliance. As Chinese AI firms expand globally, consumer expectations and cultural norms encourage content moderation. The study concludes that governance reflects interaction between state authority, market forces and society.
Quebec’s financial regulator has opened a review into how AI tools are being used to collect consumer debt across the province. The Autorité des marchés financiers is examining whether automated systems comply with governance, privacy and fairness standards.
Draft guidelines released in 2025 require financial institutions to maintain registries of AI systems, conduct bias testing and ensure human oversight. Public consultations closed in November, with regulators stressing that automation must remain explainable and accountable.
Many debt collection platforms now rely on predictive analytics to tailor the timing, tone and frequency of messages sent to borrowers. Regulators are assessing whether such personalisation risks undue pressure or opaque decision making.
BBC technology reporting reveals that Orchids, a popular ‘vibe-coding’ platform that lets users build applications through simple text prompts and AI-assisted generation, contains serious, unresolved security weaknesses. A malicious actor could exploit them to breach accounts and tamper with code or data.
A cybersecurity researcher demonstrated that the platform’s authentication and input handling mechanisms can be exploited, allowing unauthorised access to projects and potentially enabling attackers to insert malicious code or exfiltrate sensitive information.
Because Orchids abstracts conventional coding into natural-language prompts and shared project spaces, the attack surface for such vulnerabilities is larger than in traditional development environments.
The report underscores broader concerns in the AI developer ecosystem: as AI-driven tools lower technical barriers, they also bring new security challenges when platforms rush to innovate without fully addressing fundamental safeguards such as secure authentication, input validation and permission controls.
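The report gives no technical detail on Orchids’ internals, but the class of safeguard in question is simple to illustrate. Below is a minimal, hypothetical Python sketch (all names, tokens and data are invented, and this is not Orchids’ actual code) of the three checks just named: session authentication, input validation of client-supplied identifiers, and a per-project permission gate applied before any AI-generated code is accepted.

```python
import hmac
import re

# Hypothetical illustration only: Orchids' internals are not public.
# This sketches the basic safeguards the report says AI coding
# platforms sometimes skip: authenticated sessions, input validation,
# and per-project permission checks.

SESSIONS = {"token-abc": "alice"}          # session token -> user
PROJECT_OWNERS = {"proj-1": "alice"}       # project -> owning user
PROJECT_ID = re.compile(r"^proj-\d+$")     # reject malformed IDs early

def apply_generated_code(token: str, project_id: str, code: str) -> str:
    # 1. Authentication: constant-time comparison avoids timing leaks.
    user = next((u for t, u in SESSIONS.items()
                 if hmac.compare_digest(t, token)), None)
    if user is None:
        raise PermissionError("invalid session token")
    # 2. Input validation: never trust client-supplied identifiers.
    if not PROJECT_ID.match(project_id):
        raise ValueError("malformed project id")
    # 3. Authorisation: the caller must own the project they modify.
    if PROJECT_OWNERS.get(project_id) != user:
        raise PermissionError("not authorised for this project")
    # Only now would the platform stage the AI-generated code.
    return f"staged {len(code)} bytes for review in {project_id}"

print(apply_generated_code("token-abc", "proj-1", "print('hi')"))
```

Each check fails closed; skipping any one of them opens exactly the kind of unauthorised-access path the researcher described.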
Experts cited in the article urge industry and regulators to prioritise robust security testing and clear accountability when deploying AI-assisted coding systems.
Artificial intelligence startup Simile has raised $100m to develop a model designed to predict human behaviour in commercial and corporate contexts. The funding round was led by Index Ventures with participation from Bain Capital Ventures and other investors.
The company is building a foundation model trained on interviews, transaction records and behavioural science research. Its AI simulations aim to forecast customer purchases and anticipate questions analysts may raise during earnings calls.
Simile says the technology could offer an alternative to traditional focus groups and market testing. Retail trials have included using the system to guide decisions on product placement and inventory.
Founded by Stanford-affiliated researchers, the startup recently emerged from stealth after months of development. Prominent AI figures, including Fei-Fei Li and Andrej Karpathy, joined the funding round as it seeks to scale predictive decision-making tools.
Growing numbers of students are using AI chatbots such as ChatGPT to guide their college search, reshaping how institutions attract applicants. Surveys show nearly half of high school students now use artificial intelligence tools during the admissions process.
Unlike traditional search engines, generative AI provides direct answers rather than website links, keeping users within conversational platforms. That shift has prompted universities to focus on ‘AI visibility’, ensuring their information is accurately surfaced by chatbots.
Institutions are refining website content through answer engine optimisation to improve how AI systems interpret their programmes and values. Clear, updated data is essential, as generative models can produce errors or outdated responses.
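What answer engine optimisation looks like in practice varies, but one widely used tactic is publishing structured data that machines can parse unambiguously. The sketch below is a hypothetical Python example (the institution and programme details are invented) that renders a schema.org EducationalOccupationalProgram description as the JSON-LD block a university might embed in a programme page.

```python
import json

# Hypothetical example of one answer-engine-optimisation tactic:
# embedding machine-readable structured data (schema.org JSON-LD) so
# crawlers and AI systems can extract programme facts without parsing
# marketing prose. All institution details below are invented.

program = {
    "@context": "https://schema.org",
    "@type": "EducationalOccupationalProgram",
    "name": "BSc Computer Science",
    "provider": {"@type": "CollegeOrUniversity",
                 "name": "Example State University"},
    "timeToComplete": "P4Y",            # ISO 8601 duration: four years
    "educationalProgramMode": "full-time",
    "applicationDeadline": "2026-01-15",
}

# Emit the <script> tag a programme page would carry in its HTML head.
print('<script type="application/ld+json">')
print(json.dumps(program, indent=2))
print("</script>")
```

Because the facts are stated as typed fields rather than prose, a model ingesting the page is less likely to misreport a deadline or programme length.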
College leaders see both opportunity and risk in the trend. While AI can help families navigate complex choices, advisers warn that trust, accuracy and the human element remain critical in higher education decision-making.
Portugal’s parliament has approved a draft law that would require parental consent for teenagers aged 13 to 16 to use social media, in a move aimed at strengthening online protections for minors. The proposal passed its first reading on Thursday and will now move forward in the legislative process, where it could still be amended before a final vote.
The bill is backed by the ruling Social Democratic Party (PSD), which argues that stricter rules are needed to shield young people from online risks. Lawmakers cited concerns over cyberbullying, exposure to harmful content, and contact with online predators as key reasons for tightening access.
Under the proposal, parents would have to grant permission through Portugal’s public Digital Mobile Key system. Social media companies would be required to introduce age verification mechanisms linked to the system to ensure that only authorised teenagers can create and maintain accounts.
The legislation also seeks to strengthen enforcement of the existing ban on social media access for children under 13. Authorities believe the new measures would make it harder for younger users to bypass age limits.
The draft law was approved in its first reading by 148 votes to 69, with 13 abstentions. A PSD lawmaker warned that companies failing to comply could face fines of up to 2% of their global revenue, signalling that the government intends to take enforcement seriously.
Mortgage lenders face growing pressure to govern AI as regulatory uncertainty persists across the United States. State and federal authorities continue to contest who holds oversight, but accountability for how AI is used in underwriting, servicing, marketing and fraud detection already rests with lenders.
Effective AI risk management requires more than policy statements. Mortgage lenders need operational governance that inventories AI tools, documents training data, and assigns accountability for outcomes, including bias monitoring and escalation when AI affects borrower eligibility, pricing, or disclosures.
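As a rough illustration of what such an inventory might capture, here is a minimal, hypothetical Python sketch; the record fields, system names and review thresholds are invented rather than drawn from any regulatory text.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical sketch of the inventory record a governance programme
# might keep for each AI tool; all fields and values are invented.

@dataclass
class AISystemRecord:
    name: str                 # e.g. an underwriting or fraud model
    use_case: str             # where in the lending pipeline it runs
    affects_borrowers: bool   # triggers bias monitoring if True
    training_data: str        # provenance of the data it learned from
    owner: str                # named person accountable for outcomes
    last_bias_review: date    # escalate when this goes stale
    issues: list[str] = field(default_factory=list)

registry = [
    AISystemRecord(
        name="doc-intake-ocr",
        use_case="document processing",
        affects_borrowers=False,
        training_data="vendor-supplied corpus; audit rights in contract",
        owner="ops-lead",
        last_bias_review=date(2025, 9, 1),
    ),
]

# Flag borrower-facing systems whose bias review is over a year old.
for rec in registry:
    overdue = (date.today() - rec.last_bias_review).days > 365
    if rec.affects_borrowers and overdue:
        print(f"escalate: {rec.name} overdue for fairness review")
```

The point of such a record is the assignment of a named owner and a review cadence: borrower-facing systems get the strictest monitoring, mirroring the staged-deployment approach described below.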
Vendor risk has become a central exposure. Many technology contracts predate AI scrutiny and lack provisions on audit rights, explainability, and data controls, leaving lenders responsible when third-party models fail regulatory tests or transparency expectations.
Leading US mortgage lenders are using staged deployments, starting with lower-risk use cases such as document processing and fraud detection, while maintaining human oversight for high-impact decisions. Incremental rollouts generate performance and fairness evidence that regulators increasingly expect.
Regulatory pressure is rising as states advance AI rules and federal authorities signal the development of national standards. Even as boundaries are debated, lenders remain accountable, making early governance and disciplined scaling essential.
Officials in Russia have confirmed that no plans are underway to restrict access to Google, despite recent public debate about the possibility of a technical block. Anton Gorelkin, a senior lawmaker, said regulators clarified that such a step is not being considered.
Concerns centre on the impact a ban would have on devices running Android, which are used by a significant share of smartphone owners in the country.
A block on Google would disrupt essential digital services without encouraging the company to resolve ongoing legal disputes over unpaid fines.
Gorelkin noted that court proceedings abroad are still in progress, meaning enforcement options remain open. He added that any future move to reduce reliance on Google services should follow a gradual pathway supported by domestic technological development rather than abrupt restrictions.
The comments follow earlier statements from another lawmaker, Andrey Svintsov, who acknowledged that blocking Google in Russia is technically feasible but unnecessary.
Officials now appear focused on creating conditions that would allow local digital platforms to grow without destabilising existing infrastructure.
Lawmakers in the European Parliament remain divided over whether a direct ban on AI-driven ‘pornification’ should be added to the emerging digital omnibus.
Left-wing members push for an explicit prohibition, arguing that synthetic sexual imagery generated without consent has created a rapidly escalating form of online abuse. They say a strong legal measure is required instead of fragmented national responses.
Centre and liberal groups take a different position, promoting lighter requirements for industrial AI and seeking clarity on how any restrictions would interact with the AI Act.
They warn that an unrefined ban could spill over into general-purpose models and complicate enforcement across the European market. Their priority is a more predictable regulatory environment for companies developing high-volume AI systems.
Key figures across the political spectrum, including lawmakers such as Assita Kanko, Axel Voss and Brando Benifei, continue to debate how far the omnibus should go.
Some argue that safeguarding individuals from non-consensual sexual deepfakes must outweigh concerns about administrative burdens, while others insist that proportionality and technical feasibility need stronger assessment.
The lack of consensus leaves the proposal in a delicate phase as negotiations intensify. Lawmakers now face growing public scrutiny over how Europe will respond to the misuse of generative AI.
The Parliament has yet to take a clear stance, and the path toward agreement remains far from assured.
In Houston, more than 200 students from across the US gathered to discuss the future of AI in schools. The event, organised by the Close Up Foundation and Stanford University’s Deliberative Democracy Lab, brought together participants from 39 schools in 19 states.
Students debated whether AI tools such as ChatGPT and Gemini support or undermine learning. Many argued that schools are introducing powerful systems before pupils develop core critical thinking skills.
Participants did not call for a total ban or full embrace of AI. Instead, they urged schools to delay exposure for younger pupils and introduce clearer classroom policies that distinguish between support and substitution.
After returning to Honolulu, a student from ʻIolani School said Hawaiʻi schools should involve students directly in AI policy decisions, arguing that structured dialogue can help schools in Honolulu and beyond balance innovation with cognitive development.