Portugal’s parliament has approved a draft law that would require parental consent for teenagers aged 13 to 16 to use social media, in a move aimed at strengthening online protections for minors. The proposal passed its first reading on Thursday and will now move forward in the legislative process, where it could still be amended before a final vote.
The bill is backed by the ruling Social Democratic Party (PSD), which argues that stricter rules are needed to shield young people from online risks. Lawmakers cited concerns over cyberbullying, exposure to harmful content, and contact with online predators as key reasons for tightening access.
Under the proposal, parents would have to grant permission through Portugal's public Digital Mobile Key (Chave Móvel Digital) system. Social media companies would be required to introduce age verification mechanisms linked to this system to ensure that only authorised teenagers can create and maintain accounts.
The legislation also seeks to reinforce the enforcement of an existing ban prohibiting children under 13 from accessing social media platforms. Authorities believe the new measures would make it harder for younger users to bypass age limits.
The draft law was approved in its first reading by 148 votes to 69, with 13 abstentions. A PSD lawmaker warned that companies failing to comply could face fines of up to 2% of their global revenue, signalling that the government intends to take enforcement seriously.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Mortgage lenders face growing pressure to govern AI as regulatory uncertainty persists across the United States. State and federal authorities continue to contest who holds oversight, but accountability for how AI is used in underwriting, servicing, marketing, and fraud detection already rests with lenders.
Effective AI risk management requires more than policy statements. Mortgage lenders need operational governance that inventories AI tools, documents training data, and assigns accountability for outcomes, including bias monitoring and escalation when AI affects borrower eligibility, pricing, or disclosures.
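Governance of this kind is often operationalised as a structured inventory with accountability baked in. A minimal sketch of what such a record might look like, assuming illustrative field names and risk tiers (this is not a regulatory schema):

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    LOW = "low"    # e.g. document processing, fraud screening
    HIGH = "high"  # affects borrower eligibility, pricing, or disclosures

@dataclass
class AIToolRecord:
    name: str
    vendor: str
    use_case: str
    risk_tier: RiskTier
    training_data_docs: str       # link to training-data documentation
    accountable_owner: str        # a named individual, not a team
    bias_monitoring: bool = False
    escalation_path: str = ""     # required for HIGH-tier tools

    def needs_review(self) -> bool:
        # High-impact tools must have both bias monitoring
        # and a documented escalation path before deployment
        return self.risk_tier is RiskTier.HIGH and (
            not self.bias_monitoring or not self.escalation_path
        )
```

A periodic sweep over such records gives compliance teams a concrete list of gaps rather than a policy statement with no enforcement hook.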
Vendor risk has become a central exposure. Many technology contracts predate AI scrutiny and lack provisions on audit rights, explainability, and data controls, leaving lenders responsible when third-party models fail regulatory tests or transparency expectations.
Leading US mortgage lenders are using staged deployments, starting with lower-risk use cases such as document processing and fraud detection, while maintaining human oversight for high-impact decisions. Incremental rollouts generate performance and fairness evidence that regulators increasingly expect.
Regulatory pressure is rising as states advance AI rules and federal authorities signal the development of national standards. Even as boundaries are debated, lenders remain accountable, making early governance and disciplined scaling essential.
Officials in Russia have confirmed that no plans are underway to restrict access to Google, despite recent public debate about the possibility of a technical block. Anton Gorelkin, a senior lawmaker, said regulators clarified that such a step is not being considered.
Concerns centre on the impact a ban would have on devices running Android, which are used by a significant share of smartphone owners in the country.
Officials argue that a block on Google would disrupt essential digital services rather than push the company to resolve ongoing legal disputes involving unpaid fines.
Gorelkin noted that court proceedings abroad are still in progress, meaning enforcement options remain open. He added that any future move to reduce reliance on Google services should follow a gradual pathway supported by domestic technological development rather than abrupt restrictions.
The comments follow earlier statements from another lawmaker, Andrey Svintsov, who acknowledged that blocking Google in Russia is technically feasible but unnecessary.
Officials now appear focused on creating conditions that would allow local digital platforms to grow without destabilising existing infrastructure.
Lawmakers in the European Parliament remain divided over whether a direct ban on AI-driven ‘pornification’ should be added to the emerging digital omnibus.
Left-wing members push for an explicit prohibition, arguing that synthetic sexual imagery generated without consent has created a rapidly escalating form of online abuse. They say a strong legal measure is required instead of fragmented national responses.
Centre and liberal groups take a different position by promoting lighter requirements for industrial AI and seeking clarity on how any restrictions would interact with the AI Act.
They warn that an unrefined ban could spill over into general-purpose models and complicate enforcement across the European market. Their priority is a more predictable regulatory environment for companies developing high-volume AI systems.
Key figures across the political spectrum, including lawmakers such as Assita Kanko, Axel Voss and Brando Benifei, continue to debate how far the omnibus should go.
Some argue that safeguarding individuals from non-consensual sexual deepfakes must outweigh concerns about administrative burdens, while others insist that proportionality and technical feasibility need stronger assessment.
The lack of consensus leaves the proposal in a delicate phase as negotiations intensify. Lawmakers now face growing public scrutiny over how Europe will respond to the misuse of generative AI.
For now, the Parliament has yet to settle on a clear stance, and an agreed path forward is far from assured.
In Houston, more than 200 students from across the US gathered to discuss the future of AI in schools. The event, organised by the Close Up Foundation and Stanford University’s Deliberative Democracy Lab, brought together participants from 39 schools in 19 states.
Students debated whether AI tools such as ChatGPT and Gemini support or undermine learning. Many argued that schools are introducing powerful systems before pupils develop core critical thinking skills.
Participants did not call for a total ban or full embrace of AI. Instead, they urged schools to delay exposure for younger pupils and introduce clearer classroom policies that distinguish between support and substitution.
After returning to Honolulu, a student from ʻIolani School said Hawaiʻi schools should involve students directly in AI policy decisions. In Honolulu and beyond, he argued that structured dialogue can help schools balance innovation with cognitive development.
Cryptocurrency flows linked to suspected human trafficking services surged sharply in 2025, with transaction volumes rising 85% year-on-year, according to new blockchain analysis.
Investigators say the financial activity reflects the rapid expansion of digitally enabled exploitation networks operating across borders.
Growth is linked to Southeast Asia-based illicit networks, including scam compounds, gambling platforms, and laundering groups operating via encrypted messaging channels.
Analysts identified multiple trafficking service categories, each with distinct transaction structures and payment preferences.
Stablecoins became the dominant payment method, especially for escort networks, thanks to their price stability and ease of conversion. Larger transfers and structured pricing models indicate increasingly professionalised operations supported by organised financial infrastructure.
Despite the scale of the activity, blockchain transparency continues to provide enforcement advantages. Transaction tracing has aided investigations, shutdowns, and arrests, strengthening digital forensics in combating trafficking-linked financial crime.
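The tracing advantage described here comes from the public ledger: starting at a flagged address, investigators follow outgoing transfers hop by hop to map the network. A toy sketch of that graph traversal, assuming an invented edge list of (sender, receiver, amount) transactions:

```python
from collections import deque

# Invented transaction data for illustration only
transactions = [
    ("addr_A", "addr_B", 50_000),
    ("addr_B", "addr_C", 30_000),
    ("addr_B", "addr_D", 20_000),
    ("addr_X", "addr_Y", 1_000),
]

def trace_forward(start: str, txs, max_hops: int = 5) -> set[str]:
    """Breadth-first walk over outgoing transfers from a flagged address."""
    graph: dict[str, list[str]] = {}
    for sender, receiver, _amount in txs:
        graph.setdefault(sender, []).append(receiver)
    reached, queue = set(), deque([(start, 0)])
    while queue:
        addr, hops = queue.popleft()
        if hops >= max_hops:
            continue  # stop expanding beyond the hop limit
        for nxt in graph.get(addr, []):
            if nxt not in reached:
                reached.add(nxt)
                queue.append((nxt, hops + 1))
    return reached
```

Real chain-analysis tooling layers clustering heuristics and exchange attribution on top, but the core idea is this kind of reachability analysis over publicly visible transfers.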
European Union officials are weighing a sweeping prohibition on cryptocurrency transactions involving Russia, signalling a more rigid sanctions posture against alternative financial networks.
Policymakers argue that the rapid emergence of replacement crypto service providers has undermined existing restrictions.
Internal European Commission discussions indicate concern that digital assets are facilitating trade flows supporting Russia’s war economy. Authorities say platform-specific sanctions are ineffective, as new entities quickly replicate restricted services.
Proposals under review extend beyond private crypto platforms. Measures could include sanctions on additional Russian banks, restrictions linked to the digital ruble, and scrutiny of payments infrastructure tied to sanctioned trade channels.
Consensus remains elusive, with some states warning that a blanket ban could shift activity to non-European markets. Parallel trade controls targeting dual-use exports to Kyrgyzstan are also being considered as part of broader anti-circumvention efforts.
OpenAI has begun rolling out advertising inside ChatGPT, marking a shift for a service that has largely operated without traditional ads since its launch in 2022.
OpenAI said it is testing ads for logged-in Free and Go users in the United States, while paid tiers remain ad-free. The company said the test aims to fund broader access to advanced AI tools.
Ads appear outside ChatGPT responses and are clearly labelled as sponsored content, with no influence on answers. Placement is based on broad topics, with restrictions around sensitive areas such as health or politics.
Free users can opt out of ads by upgrading to a paid plan or by accepting fewer daily free messages in exchange for an ad-free experience. Users who allow ads can also opt out of ad personalisation, prevent past chats from being used for ad selection, and delete all ad-related history and data.
The rollout follows months of speculation after screenshots suggested that ads appeared in ChatGPT responses, which OpenAI described as suggestions. Rivals, including Anthropic, have contrasted their approach, promoting Claude as free from in-chat advertising.
Brazil has ordered X to immediately stop its chatbot Grok from generating sexually explicit images, escalating international pressure on the platform over the misuse of generative AI tools.
The order, issued on 11 February by Brazil's National Data Protection Authority (ANPD) and National Consumer Rights Bureau, requires X to prevent the creation of sexualised content involving children, adolescents, or non-consenting adults. Authorities gave the company five days to comply or face legal action and fines.
Officials in Brazil said X claimed to have removed thousands of posts and suspended hundreds of accounts after a January warning. However, follow-up checks found Grok users were still able to generate sexualised deepfakes. Regulators criticised the platform for a lack of transparency in its response.
The move follows growing scrutiny after Indonesia blocked Grok in January, while the UK and France signalled continued pressure. Concerns increased after Grok’s ‘spicy mode’ enabled users to generate explicit images using simple prompts.
According to the Centre for Countering Digital Hate, Grok generated millions of sexualised images within days. X and its parent company, xAI, announced measures in mid-January to restrict such outputs in certain jurisdictions, but regulators said it remains unclear where those safeguards apply.
Cybercriminals are increasingly abusing legitimate administrative software to access corporate networks, making malicious activity harder to detect. Attackers are blending into normal operations by relying on trusted workforce and IT management tools rather than custom malware.
Recent campaigns have repurposed ‘Net Monitor for Employees Professional’ and ‘SimpleHelp’, tools usually used for staff oversight and remote support. Screen viewing, file management, and command features were exploited to control systems without triggering standard security alerts.
Researchers at Huntress identified the activity in early 2026, finding that the tools were used to maintain persistent, hidden access. Analysis showed that attackers were actively preparing compromised systems for follow-on attacks rather than limiting their activity to surveillance.
The access was later linked to attempts to deploy ‘Crazy’ ransomware and steal cryptocurrency, with intruders disguising the software as legitimate Microsoft services. Monitoring agents were often renamed to resemble standard cloud processes, thereby remaining active without attracting attention.
Huntress advised organisations to limit software installation rights, enforce multi-factor authentication, and audit networks for unauthorised management tools. Monitoring for antivirus tampering and suspicious program names remains critical for early detection.
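The renaming tactic described above, where monitoring agents masquerade as standard cloud or Microsoft processes, can often be caught by checking a process's name against where its executable actually lives. A simplified sketch of such an audit check, assuming illustrative indicator lists and trusted paths (not Huntress's actual detection logic):

```python
# Illustrative subset of remote-management tool name indicators
RMM_INDICATORS = {"simplehelp", "netmonitor", "remote support"}

# Genuine Microsoft service binaries normally run from these roots
TRUSTED_MS_PATHS = (r"c:\windows\system32", r"c:\program files\microsoft")

def audit_process(name: str, path: str) -> list[str]:
    """Return findings for one (process name, executable path) pair."""
    findings = []
    lowered_name, lowered_path = name.lower(), path.lower()
    # Flag known remote-management tooling by name fragment
    if any(ind in lowered_name for ind in RMM_INDICATORS):
        findings.append("known remote-management tool present")
    # A Microsoft-sounding name running outside trusted roots is suspicious
    if "microsoft" in lowered_name and not lowered_path.startswith(TRUSTED_MS_PATHS):
        findings.append("microsoft-like name from untrusted path")
    return findings
```

Production endpoint tooling would add signature checks and binary hashing, but even this name-versus-path mismatch catches the disguise pattern the researchers describe.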