USB inventor and Phison CEO warns of an AI storage crunch

Datuk Pua Khein-Seng, inventor of the single-chip USB flash drive and CEO of Phison, warns that AI machines will generate 1,000 times more data than humans. He says the real bottleneck isn’t GPUs but memory, foreshadowing a global storage crunch as AI scales.

Speaking at GITEX Global, Pua outlined Phison’s focus on NAND controllers and systems that can expand effective memory. Adaptive tiering across DRAM and flash, he argues, will ease constraints and cut costs, making AI deployments more attainable beyond elite data centres.

Flash becomes the expansion valve: DRAM stays scarce and expensive, while high-end GPUs take more of the blame for AI cost overruns than they deserve. By intelligently offloading and caching to NAND, cheaper accelerators can still drive useful workloads, widening access to AI capacity.
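As a rough illustration of the tiering idea, here is a toy two-level cache in Python: a small fast tier (standing in for DRAM) demotes its coldest entries to a larger slow tier (standing in for flash) instead of dropping them, so later hits can be served without recomputation. The class and method names are hypothetical; real NAND controllers are far more sophisticated:

```python
from collections import OrderedDict

class TieredCache:
    """Toy two-tier cache: a small, fast 'DRAM' tier backed by a larger,
    slower 'flash' tier. When the fast tier fills up, the least recently
    used entry is demoted to the slow tier rather than evicted outright."""

    def __init__(self, dram_slots, flash_slots):
        self.dram = OrderedDict()   # fast tier, kept in LRU order
        self.flash = OrderedDict()  # slow tier, also LRU order
        self.dram_slots = dram_slots
        self.flash_slots = flash_slots

    def put(self, key, value):
        self.flash.pop(key, None)           # avoid duplicates across tiers
        self.dram[key] = value
        self.dram.move_to_end(key)          # mark as most recently used
        if len(self.dram) > self.dram_slots:
            old_key, old_val = self.dram.popitem(last=False)  # coldest entry
            self.flash[old_key] = old_val                     # demote, don't drop
            if len(self.flash) > self.flash_slots:
                self.flash.popitem(last=False)                # flash is full: evict

    def get(self, key):
        if key in self.dram:
            self.dram.move_to_end(key)
            return self.dram[key]
        if key in self.flash:               # flash hit: promote back to DRAM
            return self.put(key, self.flash[key]) or self.dram[key]
        return None
```

The point of the sketch is the demotion step: a miss in the fast tier is not a full miss, which is what makes a smaller, cheaper fast tier tolerable.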

Cloud centralisation intensifies the risk. With the US and China dominating the AI cloud market, many countries lack the capital and talent to build sovereign stacks. Pua calls for ‘AI blue-collar’ skills to localise open source and tailor systems to real-world applications.

Storage leadership is consolidating in the US, Japan, Korea, and China, with Taiwan rising as a fifth pillar. Hardware strength alone won’t suffice, Pua says; Taiwan must close the AI software gap to capture more value in the data era.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Suzanne Somers lives on in an AI twin

Alan Hamel says he’s moving ahead with a ‘Suzanne AI Twin’ to honour Suzanne Somers’ legacy. The project mirrors plans the couple discussed for decades. He shared an early demo at a recent conference.

Hamel describes the prototype as startlingly lifelike. He says side-by-side, he can’t tell real from AI. The goal is to preserve Suzanne’s voice, look, and mannerisms.

Planned uses include archival storytelling, fan Q&As, and curated appearances. The team is training the model on interviews, performances, and writings. Rights and guardrails are being built in.

Supporters see a new form of remembrance. Critics warn of deepfake risks and consent boundaries. Hamel says fidelity and respect are non-negotiable.

Next steps include wider testing and a controlled public debut. Proceeds could fund causes Suzanne championed. ‘It felt like talking to her,’ Hamel says.

DeepSeek dominates AI crypto trading challenge

Chinese AI model DeepSeek V3.1 has outperformed its global competitors in a real-market cryptocurrency trading challenge, earning over 10 per cent profit in just a few days.

The experiment, named Alpha Arena, was launched by US research firm Nof1 to test the investing skills of leading LLMs.

Each participating AI was given US$10,000 to trade in six cryptocurrency perpetual contracts, including bitcoin and solana, on the decentralised exchange Hyperliquid. By Tuesday afternoon, DeepSeek V3.1 led the field, while OpenAI’s GPT-5 trailed behind with a loss of nearly 40 per cent.

The competition highlights the growing potential of AI models to make autonomous financial decisions in real markets.

It also underscores the rivalry between Chinese and American AI developers as they push to demonstrate their models’ adaptability beyond traditional text-based tasks.

MIT unveils SEAL, a self-improving AI model

Researchers at the Massachusetts Institute of Technology (MIT) have unveiled SEAL, a new AI model capable of improving its own performance without human intervention. The framework allows the model to generate its own training data and fine-tuning instructions, enabling it to learn new tasks autonomously.

The model employs reinforcement learning, a method in which it tests different strategies, evaluates their effectiveness, and adjusts its internal processes accordingly. This allows SEAL to refine its capabilities and increase accuracy over time.
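The propose-evaluate-adjust loop described above can be sketched as a toy hill-climbing routine. Everything here (the function names, the `mutate`/`evaluate` split) is illustrative, and it is far simpler than SEAL's actual self-edit reinforcement learning:

```python
import random

def self_improve(evaluate, mutate, initial, rounds=200, seed=0):
    """Toy 'propose, evaluate, adjust' loop: generate a variant of the
    current strategy and keep it only if it scores better. A heavily
    simplified stand-in for reinforcement-learning-based self-improvement."""
    rng = random.Random(seed)
    best, best_score = initial, evaluate(initial)
    for _ in range(rounds):
        candidate = mutate(best, rng)   # the system proposes its own update,
        score = evaluate(candidate)     # tests its effectiveness,
        if score > best_score:          # and adjusts only when it improves
            best, best_score = candidate, score
    return best, best_score

# Example: tune a single parameter toward an unknown target.
target = 7.3
best, score = self_improve(
    evaluate=lambda x: -abs(x - target),      # higher score is better
    mutate=lambda x, rng: x + rng.uniform(-1, 1),
    initial=0.0,
)
```

The key property, shared with the systems in the article, is that no externally supplied training signal is needed beyond the evaluation function itself.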

In trials, SEAL outperformed GPT-4.1 by learning from the data it generated independently. The results demonstrate the potential of self-improving AI systems to reduce reliance on manually curated datasets and human-led fine-tuning.

Most EU workers now rely on digital tools and AI

A new EU study finds that 90% of workers rely on digital tools, while nearly a third use AI-powered chatbots in their daily work. The JRC and European Commission surveyed over 70,000 workers across all EU Member States between 2024 and 2025.

The findings show that AI is most commonly used for writing and translation tasks, followed by data processing and image generation. Adoption rates are particularly high in Northern and Central Europe, especially in office-based sectors.

Alongside this digital transformation, workplace monitoring is becoming increasingly widespread, with 37% of EU workers reporting that their working hours are tracked and 36% that their entry and exit times are monitored.

Algorithmic management, where digital systems allocate tasks or assess performance automatically, now affects about a quarter of EU workers. The study also identifies a growing ‘platformisation’ trend, categorising employees based on their exposure to digital monitoring and algorithmic control.

Workers facing full or physical platformisation often report higher stress levels and reduced autonomy, while informational platformisation appears to have milder effects, particularly for remote workers.

Researchers urge EU policymakers to curb digital oversight risks while promoting fair and responsible innovation. The findings support EU initiatives like the Quality Jobs Roadmap and efforts to regulate algorithmic management.

ChatGPT to exit WhatsApp after Meta policy change

OpenAI says ChatGPT will leave WhatsApp on 15 January 2026 after Meta’s new rules banning general-purpose AI chatbots on the platform. ChatGPT will remain available on iOS, Android, and the web, the company said.

Users are urged to link their WhatsApp number to a ChatGPT account to preserve history, as WhatsApp doesn’t support chat exports. OpenAI will also let users unlink their phone numbers after linking.

Until now, users could message ChatGPT on WhatsApp to ask questions, search the web, generate images, or talk to the assistant. Similar third-party bots offered comparable features.

Meta quietly updated WhatsApp’s business API to prohibit AI providers from accessing or using it, directly or indirectly. The change effectively forces ChatGPT, Perplexity, Luzia, Poke, and others to shut down their WhatsApp bots.

The move highlights platform risk for AI assistants and shifts demand toward native apps and web. Businesses relying on WhatsApp AI automations will need alternatives that comply with Meta’s policies.

Innovation versus risk shapes Australia’s AI debate

At the AI Leadership Summit in Brisbane, Australia’s business leaders were urged to adopt AI now to stay competitive, despite the absence of hard rules. The National AI Centre unveiled revised voluntary guidelines, and Assistant Minister Andrew Charlton said a national AI plan will arrive later this year.

The guidance sets six priorities, from stress-testing and human oversight to clearer accountability, aiming to give boards practical guardrails. Speakers from NVIDIA, OpenAI, and legal and academic circles welcomed direction but pressed for certainty to unlock stalled investment.

Charlton said the plan will focus on economic opportunity, equitable access, and risk mitigation, noting some harms are already banned, including ‘nudify’ apps. He argued Australia will be poorer if it hesitates, and regulators must be ready to address new threats directly.

The debate centred on proportional regulation: too many rules could stifle innovation, said Clayton Utz partner Simon Newcomb, yet delays and ambiguity can also chill projects. A ‘gap analysis’ announced by Treasurer Jim Chalmers will map which risks existing laws already cover.

CyberCX’s Alastair MacGibbon warned that criminals are using AI to deliver sharper phishing attacks and flagged the return of erotic features in some chatbots as an oversight test. His message echoed across panels: move fast with governance, or risk ceding both competitiveness and safety.

AI chats with ‘Jesus’ spark curiosity and criticism

Text With Jesus, an AI chatbot from Catloaf Software, lets users message figures like ‘Jesus’ and ‘Moses’ for scripture-quoting replies. CEO Stéphane Peter says curiosity is driving rapid growth despite accusations of blasphemy and worries about tech intruding on faith.

Built on OpenAI’s ChatGPT, the app now includes AI pastors and counsellors for questions on scripture, ethics, and everyday dilemmas. Peter, who describes himself as not particularly religious, says the aim is access and engagement, not replacing ministry or community.

Examples range from ‘Do not be anxious…’ (Philippians 4:6) to the Golden Rule (Matthew 7:12), with answers framed in familiar verse. Fans call it a safe, approachable way to explore belief; critics argue only scripture itself should speak.

Faith leaders and commentators have cautioned against mistaking AI outputs for wisdom. The Vatican has stressed that AI is a tool, not truth, and that young people need guidance, not substitution, in spiritual formation.

Reception is sharply split online. Supporters praise its convenience and the curiosity it sparks; detractors cite theological drift, emoji-laden replies, and a ‘Satan’ mode they find chilling. The app holds a 4.7 rating on the Apple App Store from more than 2,700 reviews.

AWS outage turned a mundane DNS slip into global chaos

Cloudflare’s boss summed up the mood after Monday’s chaos, relieved his firm wasn’t to blame as outages rippled across more than 1,000 companies. Snapchat, Reddit, Roblox, Fortnite, banks, and government portals faltered together, exposing how much of the web leans on Amazon Web Services.

AWS is the backbone for a vast slice of the internet, renting compute, storage, and databases so firms avoid running their own stacks. However, a mundane Domain Name System error in its Northern Virginia region scrambled routing, leaving services online yet unreachable as traffic lost its map.

Engineers call it a classic failure mode: ‘It’s always DNS.’ Misconfigurations, maintenance slips, or server faults can cascade quickly across shared platforms. AWS says teams moved to mitigate, but the episode showed how a small mistake at scale becomes a global headache in minutes.
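The failure mode described above, servers up but names unresolvable, can be seen in a few lines of Python. The hostnames below are placeholders, and this only illustrates the distinction rather than diagnosing any real incident:

```python
import socket

def check(hostname, port=443, timeout=3):
    """Separate 'the name will not resolve' (a DNS failure) from
    'the host will not accept a connection'. During a DNS incident,
    the first step fails even if the servers behind the name are fine."""
    try:
        infos = socket.getaddrinfo(hostname, port, proto=socket.IPPROTO_TCP)
    except socket.gaierror as exc:
        return f"DNS failure: {exc}"        # traffic has lost its map
    ip = infos[0][4][0]                     # first resolved address
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return "reachable"
    except OSError as exc:
        return f"connect failure: {exc}"    # resolved, but unreachable
```

When a provider's DNS records are misconfigured, every client sees the first branch, so a perfectly healthy service appears to be down, which is exactly what made Monday's outage so confusing to observers.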

Experts warned of concentration risk: when one hyperscaler stumbles, many fall. Yet few true alternatives exist at AWS’s scale beyond Microsoft Azure and Google Cloud, with smaller rivals from IBM to Alibaba, and fledgling European plays, far behind.

Calls for UK and EU cloud sovereignty are growing, but timelines and costs are steep. Monday’s outage is a reminder that resilience needs multi-region and multi-cloud designs, tested failovers, and clear incident comms, not just faith in a single provider.

AI still struggles to mimic natural human conversation

A recent study reveals that large language models such as ChatGPT-4, Claude, Vicuna, and Wayfarer still struggle to replicate natural human conversation. Researchers found AI over-imitates, misuses filler words, and struggles with natural openings and closings, revealing its artificial nature.

The research, led by Eric Mayor with contributions from Lucas Bietti and Adrian Bangerter, compared transcripts of human phone conversations with AI-generated ones. AI can speak correctly, but subtle social cues like timing, phrasing, and discourse markers remain hard to mimic.

Misplaced words such as ‘so’ or ‘well’ and awkward conversation transitions make AI dialogue recognisably non-human. Openings and endings also pose a challenge. Humans naturally engage in small talk or closing phrases such as ‘see you soon’ or ‘alright, then,’ which AI systems often fail to reproduce convincingly.

These gaps in social nuance, researchers argue, prevent large language models from consistently fooling people in conversation tests.

Despite rapid progress, experts caution that AI may never fully capture all elements of human interaction, such as empathy and social timing. Advances may narrow the gap, but key differences will likely remain, keeping AI speech subtly distinguishable from real human dialogue.
