World Liberty Financial’s WLTC Holdings LLC has applied to the Office of the Comptroller of the Currency to establish World Liberty Trust Company, National Association (WLTC), a national trust bank designed for stablecoin operations.
The move aims to centralise issuance, custody, and conversion of USD1, the company’s dollar-backed stablecoin. USD1 has grown rapidly, reaching over $3.3 billion in circulation during its first year.
The trust company will serve institutional clients, providing stablecoin conversion and secure custody for USD1 and other supported stablecoins.
WLTC will operate under federal supervision, offering fee-free USD1 issuance and redemption, market-rate USD conversion, and custody. Operations will comply with the GENIUS Act and follow strict AML, sanctions, and cybersecurity protocols.
The stablecoin is fully backed by US dollars and short-duration Treasury obligations, operating across ten blockchain networks, including Ethereum, Solana, and TRON.
By combining regulatory oversight with full-stack stablecoin services, WLTC seeks to provide institutional clients with clarity and efficiency in digital asset operations.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Portola has launched Tolan, a voice-first AI companion that learns from ongoing conversations and appears as personalised, animated characters. Tolan is designed for open-ended dialogue, making voice interactions more natural and engaging than standard text-based AI.
Built around memory and character design, the platform uses real-time context reconstruction to maintain personality and track shifting topics. Each turn, the system retrieves user memories, persona traits, and conversation tone, enabling coherent, adaptive responses.
With GPT‑5.1, Tolan gained improved latency, steerability, and consistency, reducing memory recall errors by 30% and boosting next-day retention by over 20%.
Tolan’s architecture combines fast vector-based memory, dynamic emotional adjustment, and layered persona scaffolds. Sub-second responses and context rebuilding help the AI handle topic changes, maintain tone, and feel more human-like.
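The per-turn context reconstruction described above lends itself to a simple illustration. Below is a minimal Python sketch, assuming a toy embedding function and hypothetical names (embed, MemoryStore, build_turn_context); it shows the general pattern of recalling relevant memories from a vector store and reassembling persona traits and tone each turn, not Tolan’s actual implementation.

```python
# Minimal sketch of per-turn context reconstruction, loosely following the
# description above (vector-based memory + persona traits + tone).
# The hashing "embedding" and all names are hypothetical stand-ins.
import math
from dataclasses import dataclass, field

def embed(text: str, dim: int = 64) -> list[float]:
    """Toy stand-in for a real embedding model: hash character trigrams into a vector."""
    vec = [0.0] * dim
    for i in range(len(text) - 2):
        vec[hash(text[i:i + 3]) % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

@dataclass
class MemoryStore:
    items: list[tuple[str, list[float]]] = field(default_factory=list)

    def add(self, text: str) -> None:
        self.items.append((text, embed(text)))

    def recall(self, query: str, k: int = 3) -> list[str]:
        q = embed(query)
        ranked = sorted(self.items, key=lambda it: cosine(q, it[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

def build_turn_context(store: MemoryStore, persona: dict, user_utterance: str) -> str:
    """Rebuild the prompt context each turn: persona traits, tone, and relevant memories."""
    memories = store.recall(user_utterance)
    return (
        f"Persona: {persona['name']}, traits: {', '.join(persona['traits'])}\n"
        f"Tone: {persona['tone']}\n"
        f"Relevant memories: {'; '.join(memories)}\n"
        f"User: {user_utterance}"
    )

# Usage
store = MemoryStore()
store.add("User's dog is called Miso")
store.add("User is training for a half marathon")
persona = {"name": "Tolan", "traits": ["curious", "warm"], "tone": "playful"}
print(build_turn_context(store, persona, "How should I plan my next run?"))
```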
Since February 2025, Tolan has gained over 200,000 monthly users, earning a 4.8-star rating on the App Store. Future plans focus on multimodal voice agents integrating vision, context, and enhanced steerability to expand the boundaries of interactive AI.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Thyroid cancer, the most common endocrine malignancy, poses challenges for surgeons trying to remove tumours while preserving healthy tissue.
Fine-needle aspiration and pathology are accurate but slow, providing no real-time guidance and sometimes causing unnecessary or incomplete surgeries. Dynamic Optical Contrast Imaging (DOCI) uses the natural fluorescence of cells to quickly distinguish healthy tissue from cancer.
The technique captures 23 optical channels from freshly excised tissue, creating detailed spectral maps without dyes or contrast agents. These optical signatures allow for rapid, label-free tissue analysis.
Researchers at Duke University and UCLA combined DOCI with AI to improve accuracy in classification and localisation. A two-stage machine-learning approach first categorised tissue as healthy or cancerous, including common and aggressive thyroid cancer subtypes.
Deep-learning models then produced tumour probability maps, pinpointing cancerous regions with minimal false positives.
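As a rough illustration of such a two-stage pipeline, the sketch below trains a specimen-level screening model and a pixel-level probability-map model on synthetic 23-channel data. The model choices (a random forest and a small MLP) and all names are assumptions for illustration, not the architectures used in the study.

```python
# Two-stage sketch on synthetic DOCI-like data.
# Stage 1: classify a specimen as healthy vs cancerous from its mean 23-channel signature.
# Stage 2: produce a per-pixel tumour probability map.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
N_CHANNELS = 23  # DOCI spectral channels per pixel

# Synthetic training data: per-pixel spectra with binary labels (0 = healthy, 1 = tumour)
healthy = rng.normal(0.4, 0.05, size=(200, N_CHANNELS))
tumour = rng.normal(0.6, 0.05, size=(200, N_CHANNELS))
X = np.vstack([healthy, tumour])
y = np.array([0] * 200 + [1] * 200)

# Stage 1: specimen-level screening (healthy vs cancerous)
stage1 = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Stage 2: pixel-level model used to build the probability map
stage2 = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0).fit(X, y)

def probability_map(image: np.ndarray) -> np.ndarray:
    """image: (H, W, 23) DOCI cube -> (H, W) tumour probability map."""
    h, w, _ = image.shape
    pixels = image.reshape(-1, N_CHANNELS)
    return stage2.predict_proba(pixels)[:, 1].reshape(h, w)

# Stage 1 demo: screen two specimen-level mean spectra
healthy_patch = rng.normal(0.4, 0.05, size=(32, 32, N_CHANNELS))
tumour_patch = rng.normal(0.6, 0.05, size=(32, 32, N_CHANNELS))
for name, patch in [("healthy specimen", healthy_patch), ("tumour specimen", tumour_patch)]:
    label = stage1.predict(patch.reshape(-1, N_CHANNELS).mean(axis=0, keepdims=True))[0]
    print(f"Stage 1 call for {name}:", "cancerous" if label else "healthy")

# Stage 2 demo: probability map on a specimen with a lesion in one corner
cube = rng.normal(0.4, 0.05, size=(32, 32, N_CHANNELS))
cube[:12, :12, :] = rng.normal(0.6, 0.05, size=(12, 12, N_CHANNELS))
pmap = probability_map(cube)
print("Mean tumour probability inside the lesion:", pmap[:12, :12].mean().round(2))
print("Mean tumour probability outside the lesion:", pmap[12:, 12:].mean().round(2))
```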
Although initial studies focused on post-excision tissue, the technology could soon offer surgeons real-time guidance in the operating room. By combining optical imaging with AI, DOCI may reduce unnecessary surgery, preserve healthy tissue, and improve outcomes for thyroid cancer patients.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The UK government has announced new measures to strengthen the security and resilience of online public services as more interactions with the state move online. Ministers say public confidence is essential as citizens increasingly rely on digital systems for everyday services.
Backed by more than £210 million, the UK Government Cyber Action Plan outlines how cyber defences and digital resilience will be improved across the public sector. A new Government Cyber Unit will coordinate risk identification, incident response, and action on complex threats spanning multiple departments.
The plan underpins wider efforts to digitise public services, including benefits applications, tax payments, and healthcare access. Officials argue that secure systems can reduce bureaucracy and improve efficiency, but only if users trust that their data is protected.
The announcement coincides with parliamentary debate on the Cyber Security and Resilience Bill, which sets clearer expectations for companies supplying services to the government. The legislation is intended to strengthen cyber resilience across critical supply chains.
Ministers also highlighted new steps to address software supply chain risks, including a Software Security Ambassador Scheme promoting basic security practices. The government says stronger cyber resilience is essential to protect public services and maintain public trust.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
OpenAI has launched ChatGPT Health, a secure platform linking users’ health information with ChatGPT’s intelligence. The platform supports, rather than replaces, medical care, helping users understand test results, prepare for appointments, and manage their wellness.
ChatGPT Health allows users to safely connect medical records and apps such as Apple Health, Function, and MyFitnessPal. All data is stored in a separate Health space with encryption and enhanced privacy to keep sensitive information secure.
Conversations in Health are not used to train OpenAI’s models.
The platform was developed with input from more than 260 physicians worldwide, ensuring that guidance is accurate, clinically relevant, and safety-focused.
HealthBench, a physician-informed evaluation framework, helps measure quality, clarity, and appropriate escalation in responses, supporting users in making informed decisions about their health.
ChatGPT Health is initially available outside the EEA, Switzerland, and the UK, with wider access expected in the coming weeks. Users can sign up for a waitlist and begin connecting records and wellness apps to receive personalised, context-aware health insights.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The online gaming platform Roblox has begun a global rollout requiring facial age checks before users can access chat features, expanding a system first tested in selected regions late last year.
The measure applies wherever chat is available and aims to create age-appropriate communication environments across the platform.
Instead of relying on self-declared ages, Roblox uses facial age estimation to group users and restrict interactions, limiting contact between adults and children under 16. Younger users need parental consent to chat, while verified users aged 13 and over can connect more freely through Trusted Connections.
The company says privacy safeguards remain central, with images deleted immediately after secure processing and no image sharing allowed in chat. Appeals, ID verification and parental controls support accuracy, while ongoing behavioural checks may trigger repeat age verification if discrepancies appear.
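As a rough sketch of how such age-band gating might look in code, the example below encodes a simplified reading of the rules described above (parental consent for younger users, limits on adult contact with under-16s, Trusted Connections for verified users aged 13 and over). The thresholds, field names, and logic are illustrative assumptions, not Roblox’s actual policy engine.

```python
# Simplified, hypothetical age-band gating for chat, based on the description above.
from dataclasses import dataclass

@dataclass
class User:
    estimated_age: int                         # from facial age estimation
    parental_consent: bool = False
    trusted_connection_verified: bool = False  # e.g. via ID verification

def can_chat(a: User, b: User) -> bool:
    """Return True if two users may chat under these simplified rules."""
    for u in (a, b):
        # Younger users need parental consent before chat is enabled at all.
        if u.estimated_age < 13 and not u.parental_consent:
            return False
    # Limit contact between adults and children under 16 unless both sides
    # are verified Trusted Connections aged 13 or over.
    adult_child_pair = ((a.estimated_age >= 18) != (b.estimated_age >= 18)
                        and min(a.estimated_age, b.estimated_age) < 16)
    if adult_child_pair:
        return all(u.estimated_age >= 13 and u.trusted_connection_verified for u in (a, b))
    return True

# Usage
print(can_chat(User(25), User(14)))                          # False: unverified adult-child pair
print(can_chat(User(25, trusted_connection_verified=True),
               User(14, trusted_connection_verified=True)))  # True: both verified, 13+
print(can_chat(User(12, parental_consent=True), User(13)))   # True: consent given, no adult involved
```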
Roblox plans to extend age checks beyond chat later in 2026, including creator tools and community features, as part of a broader push to strengthen online safety and rebuild trust in youth-focused digital platforms.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The US tech company Meta has paused the international launch of its Ray-Ban Display smart glasses after seeing higher-than-expected demand in the US.
Meta had planned to begin selling the glasses in the UK, France, Italy and Canada in early 2026, but will now prioritise fulfilling US orders instead of expanding availability.
These smart glasses work with the Meta Neural Band wrist device, which interprets small hand movements.
Meta demonstrated new tools at CES in Las Vegas, including a teleprompter mode for delivering prepared remarks and a feature that lets users write messages by moving a finger across any surface while wearing the Neural Band. Pedestrian navigation support is also being extended to additional US cities.
Meta says demand has created waiting lists stretching well into 2026, prompting the pause while it reassesses global rollout plans.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
The South Korean tech giant Samsung used CES 2026 to foreground a cross-industry debate about trust, privacy and security in the age of AI.
During its Tech Forum session in Las Vegas, senior figures from AI research and industry argued that people will only fully accept AI when systems behave predictably and users retain clear control instead of feeling locked inside opaque technologies.
Samsung outlined a trust-by-design philosophy centred on transparency, clarity and accountability. On-device AI was presented as a way to keep personal data local wherever possible, while cloud processing can be used selectively when scale is required.
Speakers said users increasingly want to know when AI is in operation, where their data is processed and how securely it is protected.
Security remained the core theme. Samsung highlighted its Knox platform and Knox Matrix to show how devices can authenticate one another and operate as a shared layer of protection.
Partnerships with companies such as Google and Microsoft were framed as essential for ecosystem-wide resilience. Although misinformation and misuse were recognised as real risks, the panel suggested that technological counter-measures will continue to develop alongside AI systems.
Consumer behaviour formed a final point of discussion. Amy Webb noted that people usually buy products for convenience rather than trust alone, meaning that AI will gain acceptance when it genuinely improves daily life.
The panel concluded that AI systems which embed transparency, robust security and meaningful user choice from the outset are most likely to earn long-term public confidence.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
China’s cyberspace regulator has proposed new limits on AI ‘boyfriend’ and ‘girlfriend’ chatbots, tightening oversight of emotionally interactive artificial intelligence services.
Draft rules released on 27 December would require platforms to intervene when users express suicidal or self-harm tendencies, while strengthening protections for minors and restricting harmful content.
The regulator defines the services as AI systems that simulate human personality traits and emotional interaction. The proposals are open for public consultation until 25 January.
The draft bans chatbots from encouraging suicide, engaging in emotional manipulation, or producing obscene, violent, or gambling-related content. Minors would need guardian consent to access AI companionship.
Platforms would also be required to disclose clearly that users are interacting with AI rather than humans. Legal experts in China warn that enforcement may be challenging, particularly in identifying suicidal intent through language cues alone.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
UK Technology Secretary Liz Kendall has urged Elon Musk’s X to act urgently after reports that its AI chatbot Grok was used to generate non-consensual sexualised deepfake images of women and girls.
The BBC identified multiple examples on X where users prompted Grok to digitally alter images, including requests to make people appear undressed or place them in sexualised scenarios without consent.
Kendall described the content as ‘absolutely appalling’ and said the government would not allow the spread of degrading images. She added that Ofcom had her full backing to take enforcement action where necessary.
The UK media regulator confirmed it had made urgent contact with xAI and was investigating concerns that Grok had produced undressed images of individuals. X has been approached for comment.
Kendall said the issue was about enforcing the law rather than limiting speech, noting that intimate image abuse, including AI-generated content, is now a priority offence under the Online Safety Act.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!