US crypto regulation faces new delays

Efforts to reform US cryptocurrency regulation have hit another delay, as senators pushed back the crucial markup of the CLARITY Act. The vote has been moved to the last week of January in an effort to secure bipartisan support.

Disagreements persist over stablecoin rewards, DeFi regulation, and the division of regulatory authority between the SEC and the CFTC. Without sufficient support, the bill risks stalling in committee and losing momentum for the year.

The CLARITY Act aims to bring structure to the US digital asset landscape, clarifying which tokens are classed as securities or commodities and expanding the CFTC’s supervisory role. It sets rules for market oversight and asset handling, providing legal clarity beyond the current enforcement-focused system.

The House passed its version in mid-2025, but the Senate has yet to agree on wording acceptable to all stakeholders. Delaying the markup gives Senate leaders time to refine the bill and rebuild support for potential 2026 reform.

Betterment confirms data breach after social engineering attack

Fintech investment platform Betterment has confirmed a data breach after hackers gained unauthorised access to parts of its internal systems and exposed personal customer information.

The incident occurred on 9 January and involved a social engineering attack connected to third-party platforms used for marketing and operational purposes.

The company said the compromised data included customer names, email and postal addresses, phone numbers and dates of birth.

No passwords or account login credentials were accessed, according to Betterment, which stressed that customer investment accounts were not breached.

Using the limited system access, attackers sent fraudulent notifications to some users promoting a crypto-related scam.

Customers were advised to ignore the messages rather than engage with them, while Betterment moved quickly to revoke the unauthorised access and begin a formal investigation with external cybersecurity support.

Betterment has not disclosed how many users were affected and has yet to provide further technical details. Representatives did not respond to requests for comment at the time of publication, while the company said outreach to impacted customers remains ongoing.

AI reshapes Europe’s labour market outlook

European labour markets are showing clear signs of cooling after a brief period of employee leverage during the pandemic.

Slower industrial growth, easing wage momentum and increased adoption of AI are encouraging firms to limit hiring instead of expanding headcounts, while workers are becoming more cautious about changing jobs.

Economic indicators suggest employment growth across the EU will slow over the coming years, with fewer vacancies and stabilising migration flows reducing labour market dynamism.

Germany, France, the UK and several central and eastern European economies are already reporting higher unemployment expectations, particularly in manufacturing sectors facing high energy costs and weaker global demand.

Despite broader caution, labour shortages persist in specific areas such as healthcare, logistics, engineering and specialised technical roles.

Southern European countries benefiting from tourism and services growth continue to generate jobs, highlighting uneven recovery patterns instead of a uniform downturn across the continent.

Concerns about automation are further shaping behaviour, as surveys indicate growing anxiety over AI reshaping roles rather than eliminating work.

Analysts expect AI to transform job structures and skill requirements, prompting workers and employers alike to prioritise adaptability instead of rapid expansion.

Robot vacuum market grows as AI becomes central to cleaning technology

Consumer hardware is becoming more deeply embedded with AI as robot vacuum cleaners evolve from simple automated devices into intelligent household assistants.

New models rely on multimodal perception and real-time decision-making rather than fixed cleaning routes, allowing them to adapt to complex domestic environments.

Advanced AI systems now enable robot vacuums to recognise obstacles, optimise cleaning sequences and respond to natural language commands. Technologies such as visual recognition and mapping algorithms support adaptive behaviour, improving efficiency while reducing manual input from users.
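At its core, the shift from fixed routes to adaptive behaviour is a planning problem: the robot keeps a map, and when its perception layer reports an obstacle it re-plans the next cleaning target rather than following a preset sequence. The sketch below is a minimal, hypothetical illustration of that idea using breadth-first search on a grid; it is not any vendor's actual navigation stack.

```python
# Minimal, hypothetical sketch of adaptive coverage cleaning: breadth-first search picks
# the nearest uncleaned cell on a grid map and re-plans around obstacles reported by the
# perception layer at run time. Grid size, cells and the planner are illustrative only.
from collections import deque

def plan_next_path(rows, cols, start, cleaned, obstacles):
    """Return a path (list of cells) from start to the nearest uncleaned, obstacle-free
    cell, or None when every reachable cell has already been cleaned."""
    queue, seen = deque([(start, [])]), {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) not in cleaned and (r, c) not in obstacles:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (r + dr, c + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and nxt not in seen and nxt not in obstacles:
                seen.add(nxt)
                queue.append((nxt, path + [nxt]))
    return None

# Usage: clean a 3x3 room in which perception has flagged the centre cell as an obstacle.
cleaned, obstacles, pos = {(0, 0)}, {(1, 1)}, (0, 0)
while (path := plan_next_path(3, 3, pos, cleaned, obstacles)) is not None:
    pos = path[-1]        # move to the newly planned target cell
    cleaned.add(pos)      # mark it as cleaned
print(sorted(cleaned))    # all eight reachable cells; the obstacle at (1, 1) is skipped
```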

Market data reflects the shift towards intelligence-led growth.

Global shipments of smart robot vacuums increased by 18.7 percent during the first three quarters of 2025, with manufacturers increasingly competing on intelligent experience rather than suction power, as integration with smart home ecosystems accelerates.

Eli Lilly and NVIDIA invest in AI-driven pharmaceutical innovation

NVIDIA and Eli Lilly have announced a joint AI co-innovation lab aimed at advancing drug discovery by combining AI with pharmaceutical research.

The partnership combines Lilly’s experience in medical development with NVIDIA’s expertise in accelerated computing and AI infrastructure.

The two companies plan to invest up to $1 billion over five years in research capacity, computing resources and specialist talent.

Based in the San Francisco Bay Area, the lab will support large-scale data generation and model development using NVIDIA platforms rather than relying solely on traditional laboratory workflows.

Beyond early research, the collaboration is expected to explore applications of AI across manufacturing, clinical development and supply chain operations.

Both NVIDIA and Eli Lilly claim the initiative is designed to enhance efficiency and scalability in medical production while fostering long-term innovation in the life sciences sector.

Morocco outlines national AI roadmap to 2030

Morocco is preparing to unveil ‘Maroc IA 2030’, a national AI roadmap designed to structure the country’s AI ecosystem and strengthen digital transformation.

The strategy seeks to modernise public services, improve interoperability across digital systems and enhance economic competitiveness, according to officials ahead of the ‘AI Made in Morocco’ event in Rabat.

A central element of the plan involves the creation of Al Jazari Institutes, a national network of AI centres of excellence connecting academic research with innovation and regional economic needs.

The roadmap prioritises technological autonomy, trusted AI use, skills development, support for local innovation and balanced territorial coverage rather than fragmented deployment.

The initiative builds on the Digital Morocco 2030 strategy launched in 2024, which places AI at the core of national digital policy.

Authorities expect the combined efforts to generate around 240,000 digital jobs and contribute approximately $10 billion to gross domestic product by 2030, while improving Morocco's international AI readiness ranking.

Additional measures include the establishment of a General Directorate for AI and Emerging Technologies to oversee public policy and the development of an Arab-African regional digital hub in partnership with the United Nations Development Programme.

These measures are intended to support sustainable and responsible digital innovation.

Multiply Labs targets automation in cell therapy manufacturing

Robotics firm Multiply Labs is introducing automation into cell therapy manufacturing to cut costs by more than 70% and increase output. The startup applies industrial robotics to clean-room environments, replacing slow and contamination-prone manual processes.

Founded in 2016, the San Francisco-based company collaborates with leading cell therapy developers, including Kyverna Therapeutics and Legend Biotech. Its robotic systems perform sterile, precision tasks involved in producing gene-modified cell therapies at scale.

Multiply Labs uses NVIDIA Omniverse to create digital twins of laboratory environments and Isaac Sim to train robots for specialised workflows. Humanoid robots built on NVIDIA’s Isaac GR00T model are also being developed to assist with material handling while maintaining hygiene standards.
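The value of a digital twin in this setting is that a candidate handling routine can be stress-tested across thousands of randomised simulated runs before it is ever executed in a clean room. The sketch below illustrates that pattern only in the abstract; VialHandlingSim, run_routine and the 1 mm alignment budget are invented stand-ins, not the Omniverse, Isaac Sim or Multiply Labs APIs.

```python
# Illustrative sketch only: estimate the success rate of a handling routine across many
# randomised simulated runs before clean-room deployment. All names and tolerances here
# are hypothetical, not the Omniverse, Isaac Sim or Multiply Labs APIs.
import random

class VialHandlingSim:
    """Toy stand-in for a simulated clean-room cell with randomised grip error."""
    def __init__(self, grip_error_mm=0.3, seed=42):
        self.grip_error_mm = grip_error_mm
        self.rng = random.Random(seed)

    def run_routine(self, approach_offset_mm):
        # A run succeeds if the randomised grip error plus the chosen approach offset
        # stays within the 1 mm alignment budget assumed for a sterile transfer.
        error = abs(self.rng.gauss(0, self.grip_error_mm)) + approach_offset_mm
        return error <= 1.0

def estimate_success_rate(offset_mm, trials=10_000):
    sim = VialHandlingSim()
    return sum(sim.run_routine(offset_mm) for _ in range(trials)) / trials

# Compare two candidate approach offsets in simulation before committing to hardware.
for offset in (0.2, 0.8):
    print(f"offset {offset} mm -> simulated success rate {estimate_success_rate(offset):.1%}")
```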

Cell therapies involve modifying patient or donor cells to treat various conditions, including cancers, autoimmune diseases, and genetic disorders. The highly customised nature of these treatments makes production costly and sensitive to human error, increasing the risk of failed batches.

By automating thousands of delicate steps, robotics improves consistency, reduces contamination, and preserves expert knowledge. Multiply Labs states that automation could enable mass production of life-saving therapies at lower cost and with greater availability.

Australia raises concerns over AI misuse on X

The eSafety regulator in Australia has expressed concern over the misuse of the generative AI system Grok on social media platform X, following reports involving sexualised or exploitative content, particularly affecting children.

Although overall report numbers remain low, Australian authorities have observed an increase in recent weeks.

The regulator confirmed that enforcement powers under the Online Safety Act remain available where content meets defined legal thresholds.

X and other services are subject to systemic obligations requiring the detection and removal of child sexual exploitation material, alongside broader industry codes and safety standards.

eSafety has formally requested further information from X regarding safeguards designed to prevent misuse of generative AI features and to ensure compliance with existing obligations.

Previous enforcement actions taken in 2025 against similar AI services resulted in their withdrawal from the Australian market.

Additional mandatory safety codes will take effect in March 2026, introducing new obligations for AI services to limit children’s exposure to sexually explicit, violent and self-harm-related material.

Authorities emphasised the importance of Safety by Design measures and continued international cooperation among online safety regulators.

AI-powered toys navigate safety concerns after early missteps

Toy makers at the Consumer Electronics Show highlighted efforts to improve AI in playthings following troubling early reports of chatbots giving unsuitable responses to children’s questions.

A recent Public Interest Research Group report found that some AI toys, such as an AI-enabled teddy bear, produced inappropriate advice, prompting companies like FoloToy to update their models and suspend problematic products.

Among newer devices, Curio’s Grok toy, which refuses to answer questions deemed inappropriate and allows parental overrides, has earned independent safety certification. However, concerns remain about continuous listening and data privacy.

Experts advise parents to be cautious about toys that retain information over time or engage in ongoing interactions with young users.

Some manufacturers are positioning AI toys as educational tools, for example language-learning companions with time-limited, guided chat interactions, while others have built in flags to alert parents when inappropriate content arises.
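In engineering terms, these safeguards amount to a guardrail placed in front of the toy's language model: classify the question's topics, refuse anything on a blocklist unless a parent has explicitly allowed it, and record a flag that a parent app can surface. The following sketch is a hypothetical illustration of that pattern, not Curio's or any other vendor's actual implementation.

```python
# Hypothetical sketch of a toy-side guardrail (not any vendor's real code): refuse
# questions touching blocked topics unless a parent has allowed them, and record a
# flag that a companion parent app could surface later.
BLOCKED_TOPICS = {"violence", "weapons", "adult content"}

def answer(question, detected_topics, parental_allowlist, flags):
    """detected_topics comes from an upstream topic classifier (not shown here)."""
    blocked = (set(detected_topics) & BLOCKED_TOPICS) - set(parental_allowlist)
    if blocked:
        flags.append(f"refused question touching on: {', '.join(sorted(blocked))}")
        return "Let's talk about something else!"
    return generate_reply(question)

def generate_reply(question):
    return f"(model reply to: {question})"   # placeholder for the toy's language model call

# Usage: the second question is refused and flagged; a parental override would allow it.
flags = []
print(answer("Tell me about dinosaurs", [], [], flags))
print(answer("Why do knights carry weapons?", ["weapons"], [], flags))
print(flags)
```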

Despite these advances, critics argue that self-regulation is insufficient and call for clearer guardrails and possible regulation to protect children in AI-toy environments.

Welsh government backs AI adoption with £2.1m support

The Welsh Government is providing £2.1 million in funding to support small and medium-sized businesses across Wales in adopting AI. The initiative aims to promote the ethical and practical use of AI, enhancing productivity and competitiveness.

Business Wales will receive £600,000 to deliver an AI awareness and adoption programme, following recent reviews of SME productivity. Additional funding will enhance tourism and events through targeted AI projects and practical workshops.

A further £1 million will expand AI upskilling through the Flexible Skills Programme, addressing digital skills gaps across regions and sectors. Employers will contribute part of the training costs to support inclusive growth.

Swansea-based Something Different Wholesale is already using AI to automate tasks, analyse market data and improve customer services. Welsh ministers say the funding supports the responsible adoption of AI, aligned with the AI Plan for Wales.
