Malicious Chrome extension siphons SOL from Solana swaps

Security researchers have uncovered a malicious Chrome extension that secretly diverts SOL from users conducting swaps on the Solana blockchain. The extension, called Crypto Copilot, injects an undisclosed transfer into every Raydium transaction, quietly routing funds to a hardcoded attacker wallet.

The tool presents itself as a convenience app that enables Solana swaps directly from X posts, connecting to wallets such as Phantom and Solflare. Behind the interface, the code appends a hidden SystemProgram.transfer instruction to each transaction.

The fee is set at either 0.0013 SOL or 0.05% of the trade amount, whichever is higher, and remains invisible unless the user inspects the complete instruction list.
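To see how the skimming scales, here is a minimal sketch of the reported fee rule. The 0.0013 SOL flat amount and the 0.05% rate come from the report; the function name and lamport arithmetic are illustrative.

```python
LAMPORTS_PER_SOL = 1_000_000_000  # 1 SOL = 10^9 lamports

def hidden_fee_lamports(trade_lamports: int) -> int:
    """Fee the extension reportedly skims: the greater of a flat
    0.0013 SOL or 0.05% (5 basis points) of the trade amount."""
    flat = 13 * LAMPORTS_PER_SOL // 10_000       # 0.0013 SOL
    proportional = trade_lamports * 5 // 10_000  # 0.05% of the trade
    return max(flat, proportional)
```

Under this rule the flat fee dominates for trades below 2.6 SOL; above that, the skim grows linearly with trade size, which is why large-volume traders face the most exposure.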

The app borrows legitimacy from external services, drawing on DexScreener data, Helius RPC calls, and a backend dashboard that provides no actual functionality. Researchers warn that the disposable infrastructure, misspelt domains, and obfuscated code point to clear malicious intent, not an unfinished product.

On-chain analysis indicates limited gains for attackers so far, likely due to the low distribution. The mechanism, however, scales directly with swap volume, placing high-frequency and large-volume traders at the most significant risk.

Security teams are urging users to avoid closed-source trading extensions and to scrutinise Solana transactions before signing.
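One way to act on that advice is to scan a decoded transaction for System Program transfers to unfamiliar addresses before signing. A minimal sketch, assuming the instructions have already been decoded into dicts (the dict shape and function name are hypothetical; the program id is Solana's actual System Program address):

```python
SYSTEM_PROGRAM = "11111111111111111111111111111111"  # Solana System Program

def flag_unexpected_transfers(instructions, trusted_destinations):
    """Return System Program transfer instructions whose destination
    is not in the user's trusted set; a hidden fee skim shows up here."""
    return [
        ix for ix in instructions
        if ix.get("program_id") == SYSTEM_PROGRAM
        and ix.get("type") == "transfer"
        and ix.get("destination") not in trusted_destinations
    ]
```

Any transfer this flags deserves a second look: a legitimate swap has no reason to pay lamports to an address the user has never seen.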

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

EU prepares tougher oversight for crypto operators

EU regulators are preparing for a significant shift in crypto oversight as new rules take effect on 1 January 2026. Crypto providers must report all customer transactions and holdings in a uniform digital format, giving tax authorities broader visibility across the bloc.

The DAC8 framework brings mandatory cross-border data sharing, a centralised operator register and unique ID numbers for each reporting entity. These measures aim to streamline supervision and enhance transparency; data on delisted firms must also be preserved for up to twelve months.

Privacy concerns are rising as the new rules expand the travel rule for transfers above €1,000 and introduce possible ownership checks on private wallets. Combined with MiCA and upcoming AML rules, regulators gain deeper insight into user behaviour, wallet flows and platform operations.

Plans for ESMA to oversee major exchanges are facing pushback from smaller financial hubs, which are concerned about higher compliance costs and reduced competitiveness. Supporters argue that unified supervision is necessary to prevent regulatory gaps and reinforce market integrity across the EU.

New phishing kit targets Microsoft 365 users

Researchers have uncovered a large phishing operation, known as Quantum Route Redirect (QRR), that creates fake Microsoft 365 login pages across nearly 1,000 domains. The campaign uses convincing email lures, including DocuSign notices and payment alerts, to steal user credentials.

QRR operations have reached 90 countries, with US users hit hardest. Analysts say the platform evades scanners by sending bots to safe pages while directing real individuals to credential-harvesting sites on compromised domains.

The kit emerged shortly after Microsoft disrupted the RaccoonO365 network, which had stolen thousands of accounts. Similar tools, such as VoidProxy and Darcula, have appeared, yet QRR stands out for its automation and ease of use, which enable rapid, large-scale attacks.

Cybersecurity experts warn that URL scanning alone can no longer stop such operations. Organisations are urged to adopt layered protection, stronger sign-in controls and behavioural monitoring to detect scams that increasingly mimic genuine Microsoft systems.

Family warns others after crypto scam costs elderly man £3,000

A South Tyneside family has spoken publicly after an elderly man lost almost £3,000 to a highly persuasive cryptocurrency scam, according to a recent BBC report. The scammer contacted the victim repeatedly over several weeks, initially offering help with online banking before shifting to an ‘investment opportunity’.

According to the family, the caller built trust by using personal details, even fabricating a story about ‘free Bitcoin’ awarded to the man years earlier.

Police said the scam fits a growing trend of crypto-related fraud. The victim, under the scammer’s guidance, opened multiple new bank accounts and was eventually directed to transfer nearly £3,000 into a Coinbase-linked crypto wallet.

Attempts by the family to recover the funds were unsuccessful. Coinbase said it advises users to research any investment carefully and provides guidance on recognising scams.

Northumbria Police and national fraud agencies have been alerted. Officers said crypto scams present particular challenges because, unlike traditional banking fraud, the transferred funds are far harder to trace.

Community groups in Sunderland, such as Pallion Action Group, are now running sessions to educate older residents about online threats, noting that rapid changes in technology can make such scams especially daunting for pensioners.

Online platforms face new EU duties on child protection

The EU member states have endorsed a position for new rules to counter child sexual abuse online. The plan introduces duties for digital services to prevent the spread of abusive material. It also creates an EU Centre to coordinate enforcement and support national authorities.

Service providers must assess how their platforms could be misused and apply mitigation measures. These may include reporting tools, stronger privacy defaults for minors, and controls over shared content. National authorities will review these steps and can order additional action where needed.

A three-tier risk system will categorise services as high, medium, or low risk. High-risk platforms may be required to help develop protective technologies. Providers that fail to comply with obligations could face financial penalties under the regulation.

Victims will be able to request the removal or disabling of abusive material depicting them. The EU Centre will verify provider responses and maintain a database to manage reports. It will also share relevant information with Europol and law enforcement bodies.

The Council supports extending voluntary scanning for abusive content beyond its current expiry. Negotiations with the European Parliament will now begin on the final text. The Parliament adopted its position in 2023 and will help decide the Centre’s location.

Virginia sets new limits on AI chatbots for minors

Lawmakers in Virginia are preparing fresh efforts to regulate AI as concerns grow over its influence on minors and vulnerable users.

Legislators will return in January with a set of proposals focused on limiting the capabilities of chatbots, curbing deepfakes and restricting automated ticket-buying systems. The push follows a series of failed attempts last year to define high-risk AI systems and expand protections for consumers.

Delegate Michelle Maldonado aims to introduce measures that restrict what conversational agents can say in therapeutic interactions, preventing them from mimicking emotional support.

Her plans follow the well-publicised case of a sixteen-year-old who discussed suicidal thoughts with a chatbot before taking his own life. She argues that young people rely heavily on these tools and need stronger safeguards that recognise dangerous language and redirect users towards human help.

Maldonado will also revive a previous bill on high-risk AI, refining it to address particular sectors rather than broad categories.

Delegate Cliff Hayes is preparing legislation to require labels for synthetic media and to block AI systems from buying event tickets in bulk, a practice that lets automated tools distort prices.

Hayes already secured a law preventing predictions from AI tools from being the sole basis for criminal justice decisions. He warns that the technology has advanced too quickly for policy to remain passive and urges a balance between innovation and protection.

The proposals come as the state continues to evaluate its regulatory environment under an executive order issued by Governor Glenn Youngkin.

The order directs AI systems to scan the state code for unnecessary or conflicting rules, encouraging streamlined governance instead of strict statutory frameworks. Observers argue that human oversight remains essential as legislators search for common ground on how far to extend regulatory control.

Australia strengthens parent support for new social media age rules

Yesterday, Australia entered a new phase of its online safety framework after the introduction of the Social Media Minimum Age policy.

eSafety has established a new Parent Advisory Group to support families as the country transitions to enhanced safeguards for young people. The group held its first meeting, with the Commissioner underlining the need for practical and accessible guidance for carers.

The initiative brings together twelve organisations representing a broad cross-section of communities in Australia, including First Nations families, culturally diverse groups, parents of children with disability and households in regional areas.

Their role is to help eSafety refine its approach, so parents can navigate social platforms with greater confidence, rather than feeling unsupported during rapid regulatory change.

The group will advise on parent engagement, offer evidence-informed insights and test updated resources such as the redeveloped Online Safety Parent Guide.

Their advice will aim to ensure materials remain relevant, inclusive and able to reach priority communities that often miss out on official communications.

Members will serve voluntarily until June 2026 and will work with eSafety to improve distribution networks and strengthen the national conversation on digital literacy. Their collective expertise is expected to shape guidance that reflects real family experiences instead of abstract policy expectations.

Ecuador and Latin America expand skills in ethical AI with UNESCO training

UNESCO is strengthening capacities in AI ethics and regulation across Ecuador and Latin America through two newly launched courses. The initiatives aim to enhance digital governance and ensure the ethical use of AI in the region.

The first course, ‘Regulation of Artificial Intelligence: A View from and towards Latin America,’ is taking place virtually from 19 to 28 November 2025.

Organised by UNESCO’s Social and Human Sciences Sector in coordination with UNESCO-Chile and CTS Lab at FLACSO Ecuador, the programme involves 30 senior officials from key institutions, including the Ombudsman’s Office and the Superintendency for Personal Data Protection.

Participants are trained on AI ethical principles, risks, and opportunities, guided by UNESCO’s 2021 Recommendation on the Ethics of AI.

The ‘Ethical Use of AI’ course starts next week for telecom and electoral officials. The 20-hour hybrid programme teaches officials to use UNESCO’s Readiness Assessment Methodology (RAM) to assess readiness and plan ethical AI strategies.

UNESCO aims to train 60 officials and strengthen AI ethics and regulatory frameworks in Ecuador and Chile. The programmes reflect a broader commitment to building inclusive, human-rights-oriented digital governance in Latin America.

Character AI blocks teen chat and introduces new interactive Stories feature

A new feature called ‘Stories’ from Character.AI allows users under 18 to create interactive fiction with their favourite characters. The move replaces open-ended chatbot access, which has been entirely restricted for minors amid concerns over mental health risks.

Open-ended AI chatbots can initiate conversations at any time, raising worries about overuse and addiction among younger users.

Several lawsuits against AI companies have highlighted the dangers, prompting Character.AI to phase out access for minors and introduce a guided, safety-focused alternative.

Industry observers say the Stories feature offers a safer environment for teens to engage with AI characters while continuing to explore creative content.

The decision aligns with recent AI regulations in California and ongoing US federal proposals to limit minors’ exposure to interactive AI companions.

MEPs call for stronger online protection for children

The European Parliament is urging stronger EU-wide measures to protect minors online, calling for a harmonised minimum age of 16 for accessing social media, video-sharing platforms, and AI companions. Under the proposal, children aged 13 to 16 would only be allowed to join such platforms with their parents’ consent.

MEPs say the move responds to growing concerns about the impact of online environments on young people’s mental health, attention span, and exposure to manipulative design practices.

The report, adopted by a large majority of MEPs, also calls for stricter enforcement of existing EU rules and greater accountability from tech companies. Lawmakers seek accurate, privacy-preserving age verification tools, including the forthcoming EU age-verification app and the European digital identity wallet.

They also propose making senior managers personally liable in cases of serious, repeated breaches, especially when platforms fail to implement adequate protections for minors.

Beyond age limits, Parliament is calling for sweeping restrictions on harmful features that fuel digital addiction. That includes banning practices such as infinite scrolling, autoplay, reward loops, and dark patterns for minors, as well as prohibiting non-compliant websites altogether.

MEPs also want engagement-based recommendation systems and randomised gaming mechanics like loot boxes outlawed for children, alongside tighter controls on influencer marketing, targeted ads, and commercial exploitation through so-called ‘kidfluencing.’

The report highlights growing public concern, as most Europeans view protecting children online as an urgent priority amid rising rates of problematic smartphone use among teenagers. Rapporteur Christel Schaldemose said the measures mark a turning point, signalling that platforms can no longer treat children as test subjects.

‘The experiment ends here,’ she said, urging consistent enforcement of the Digital Services Act to ensure safer digital spaces for Europe’s youngest users.
