Efforts to reform US cryptocurrency regulation have hit another delay, as senators pushed back the crucial markup of the CLARITY Act. The session has been moved to the last week of January to secure bipartisan support.
Disagreements persist over stablecoin rewards, DeFi regulation, and regulatory authority between the SEC and CFTC. Without sufficient support, the bill risks stalling in committee and losing momentum for the year.
The CLARITY Act aims to bring structure to the US digital asset landscape, clarifying which tokens are classed as securities or commodities and expanding the CFTC’s supervisory role. It sets rules for market oversight and asset handling, providing legal clarity beyond the current enforcement-focused system.
The House passed its version in mid-2025, but the Senate has yet to agree on wording acceptable to all stakeholders. Delaying the markup gives Senate leaders time to refine the bill and rebuild support for potential 2026 reform.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Fintech investment platform Betterment has confirmed a data breach after hackers gained unauthorised access to parts of its internal systems and exposed personal customer information.
The incident occurred on 9 January and involved a social engineering attack connected to third-party platforms used for marketing and operational purposes.
The company said the compromised data included customer names, email and postal addresses, phone numbers and dates of birth.
No passwords or account login credentials were accessed, according to Betterment, which stressed that customer investment accounts were not breached.
Using the limited system access, attackers sent fraudulent notifications to some users promoting a crypto-related scam.
Customers were advised to ignore the messages rather than engage with them, while Betterment moved quickly to revoke the unauthorised access and begin a formal investigation with external cybersecurity support.
Betterment has not disclosed how many users were affected and has yet to provide further technical details. Representatives did not respond to requests for comment at the time of publication, while the company said outreach to impacted customers remains ongoing.
Consumer hardware is becoming more deeply embedded with AI as robot vacuum cleaners evolve from simple automated devices into intelligent household assistants.
New models rely on multimodal perception and real-time decision-making, instead of fixed cleaning routes, allowing them to adapt to complex domestic environments.
Advanced AI systems now enable robot vacuums to recognise obstacles, optimise cleaning sequences and respond to natural language commands. Technologies such as visual recognition and mapping algorithms support adaptive behaviour, improving efficiency while reducing manual input from users.
Market data reflects the shift towards intelligence-led growth.
Global shipments of smart robot vacuums increased by 18.7 percent during the first three quarters of 2025, with manufacturers increasingly competing on intelligent experience rather than suction power, as integration with smart home ecosystems accelerates.
The eSafety regulator in Australia has expressed concern over the misuse of the generative AI system Grok on social media platform X, following reports involving sexualised or exploitative content, particularly affecting children.
Although overall report numbers remain low, Australian authorities have observed an increase in recent weeks.
The regulator confirmed that enforcement powers under the Online Safety Act remain available where content meets defined legal thresholds.
X and other services are subject to systemic obligations requiring the detection and removal of child sexual exploitation material, alongside broader industry codes and safety standards.
eSafety has formally requested further information from X regarding safeguards designed to prevent misuse of generative AI features and to ensure compliance with existing obligations.
Previous enforcement actions taken in 2025 against similar AI services resulted in their withdrawal from the Australian market.
Additional mandatory safety codes will take effect in March 2026, introducing new obligations for AI services to limit children’s exposure to sexually explicit, violent and self-harm-related material.
Authorities emphasised the importance of Safety by Design measures and continued international cooperation among online safety regulators.
Toy makers at the Consumer Electronics Show highlighted efforts to improve AI in playthings following troubling early reports of chatbots giving unsuitable responses to children’s questions.
A recent Public Interest Research Group report found that some AI toys, such as an AI-enabled teddy bear, produced inappropriate advice, prompting companies like FoloToy to update their models and suspend problematic products.
Among newer devices, Curio’s Grok toy, which refuses to answer questions deemed inappropriate and allows parental overrides, has earned independent safety certification. However, concerns remain about continuous listening and data privacy.
Experts advise parents to be cautious about toys that retain information over time or engage in ongoing interactions with young users.
Some manufacturers are positioning AI toys as educational tools, for example, language-learning companions with time-limited, guided chat interactions, and others have built in flags to alert parents when inappropriate content arises.
Despite these advances, critics argue that self-regulation is insufficient and call for clearer guardrails and possible regulation to protect children in AI-toy environments.
The EU has agreed to open talks with the US on sharing sensitive traveller data. The discussions aim to preserve visa-free travel for European citizens.
The proposal is called ‘Enhanced Border Security Partnership’, and it could allow transfers of biometric data and other sensitive personal information. Legal experts warn that unclear limits may widen access beyond travellers alone.
EU governments have authorised the European Commission to negotiate a shared framework. Member states would later settle details through bilateral agreements with Washington.
Academics and privacy advocates are calling for stronger safeguards and transparency. EU officials insist data protection limits will form part of any final agreement.
A US teenager targeted by explicit deepfake images has helped create a new training course. The programme aims to support students, parents and school staff facing online abuse.
The course explains how AI tools are used to create sexualised fake images. It also outlines legal rights, reporting steps and available victim support resources.
Research shows deepfake abuse is spreading among teenagers, despite stronger laws. One in eight US teens knows someone targeted by non-consensual fake images.
Developers say education remains critical as AI tools become easier to access. Schools are encouraged to adopt training to protect students and prevent harm.
Google is expanding shopping features inside its Gemini chatbot through partnerships with Walmart and other retailers. Users will be able to browse and buy products without leaving the chat interface.
An instant checkout function allows purchases through linked accounts and selected payment providers. Walmart customers can receive personalised recommendations based on previous shopping activity.
The move was announced at the latest National Retail Federation convention in New York. Tech groups are racing to turn AI assistants into end-to-end retail tools.
Google said the service will launch first in the US before international expansion. Payments initially rely on Google-linked cards, with PayPal support planned.
Canopy Healthcare, one of New Zealand’s largest private medical oncology providers, has disclosed a data breach affecting patient and staff information, six months after the incident occurred.
The company said an unauthorised party accessed part of its administration systems on 18 July 2025, copying a ‘small’ amount of data. Affected information may include patient records, passport details, and some bank account numbers.
Canopy said it remains unclear exactly which individuals were impacted and what data was taken, adding that no evidence has emerged of the information being shared or published online.
Patients began receiving notifications in December 2025, prompting criticism over the delay. One affected patient said they were unhappy to learn about the breach months after it happened.
The New Zealand company said it notified police and the Privacy Commissioner at the time, secured a High Court injunction to prevent misuse of the data, and confirmed that its medical services continue to operate normally.
Luxembourg has hosted its largest national cyber defence exercise, Cyber Fortress, bringing together military and civilian specialists to practise responding to real-time cyberattacks on digital systems.
Since its launch in 2021, Cyber Fortress has evolved beyond a purely technical drill. The exercise now includes a realistic fictional scenario supported by media injections, creating a more immersive and practical training environment for participants.
This year’s edition expanded its international reach, with teams joining from Belgium, Latvia, Malta and the EU Cyber Rapid Response Teams. Around 100 participants also took part from a parallel site in Latvia, working alongside Luxembourg-based teams.
The exercise focuses on interoperability during cyber crises. Participants respond to multiple simulated attacks while protecting critical services, including systems linked to drone operations and other sensitive infrastructure.
Cyber Fortress now covers technical, procedural and management aspects of cyber defence. A new emphasis on disinformation, deepfakes and fake news reflects the growing importance of information warfare.