Flash drive inventor and Phison CEO warns of an AI storage crunch

Datuk Pua Khein-Seng, inventor of the single-chip USB flash drive and CEO of Phison, warns that AI machines will generate 1,000 times more data than humans. He says the real bottleneck isn’t GPUs but memory, foreshadowing a global storage crunch as AI scales.

Speaking at GITEX Global, Pua outlined Phison’s focus on NAND controllers and systems that can expand effective memory. Adaptive tiering across DRAM and flash, he argues, will ease constraints and cut costs, making AI deployments more attainable beyond elite data centres.

Flash becomes the expansion valve: DRAM remains scarce and expensive, while high-end GPUs attract an outsized share of the blame for AI cost overruns. By intelligently offloading and caching to NAND, cheaper accelerators can still drive useful workloads, widening access to AI capacity.
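
To illustrate the tiering idea at a conceptual level, here is a minimal Python sketch of a two-tier cache in which a small, fast ‘DRAM’ tier spills its coldest entries to a larger ‘flash’ tier and promotes them back on access. It is a sketch of the general technique, not Phison’s controller logic; the class name, capacities, and LRU policy are assumptions made for the example.

```python
from collections import OrderedDict

class TieredKVCache:
    """Toy two-tier cache: a small, fast 'DRAM' tier backed by a larger,
    slower 'flash' tier. Hot items stay in DRAM; cold items spill to flash."""

    def __init__(self, dram_capacity: int):
        self.dram_capacity = dram_capacity
        self.dram = OrderedDict()   # ordering tracks recency of use
        self.flash = {}             # stand-in for NAND-backed storage

    def put(self, key, value):
        # Writes land in the fast tier; the least recently used entry spills to flash.
        self.dram[key] = value
        self.dram.move_to_end(key)
        if len(self.dram) > self.dram_capacity:
            cold_key, cold_value = self.dram.popitem(last=False)
            self.flash[cold_key] = cold_value

    def get(self, key):
        # DRAM hit: refresh recency. Flash hit: promote back into DRAM.
        if key in self.dram:
            self.dram.move_to_end(key)
            return self.dram[key]
        if key in self.flash:
            value = self.flash.pop(key)
            self.put(key, value)
            return value
        raise KeyError(key)

# Example: a 2-slot 'DRAM' tier spills older entries to 'flash'.
cache = TieredKVCache(dram_capacity=2)
for i in range(4):
    cache.put(f"tensor_{i}", f"data_{i}")
print(sorted(cache.dram))   # ['tensor_2', 'tensor_3']
print(sorted(cache.flash))  # ['tensor_0', 'tensor_1']
```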

Cloud centralisation intensifies the risk. With the US and China dominating the AI cloud market, many countries lack the capital and talent to build sovereign stacks. Pua calls for ‘AI blue-collar’ skills to localise open-source models and tailor systems to real-world applications.

Storage leadership is consolidating in the US, Japan, Korea, and China, with Taiwan rising as a fifth pillar. Hardware strength alone won’t suffice, Pua says; Taiwan must close the AI software gap to capture more value in the data era.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Suzanne Somers lives on in an AI twin

Alan Hamel says he’s moving ahead with a ‘Suzanne AI Twin’ to honour Suzanne Somers’ legacy. The project mirrors plans the couple discussed for decades. He shared an early demo at a recent conference.

Hamel describes the prototype as startlingly lifelike. He says that, viewed side by side, he can’t tell the real Suzanne from the AI. The goal is to preserve Suzanne’s voice, look, and mannerisms.

Planned uses include archival storytelling, fan Q&As, and curated appearances. The team is training the model on interviews, performances, and writings. Rights and guardrails are being built in.

Supporters see a new form of remembrance. Critics warn of deepfake risks and consent boundaries. Hamel says fidelity and respect are non-negotiable.

Next steps include wider testing and a controlled public debut. Proceeds could fund causes Suzanne championed. ‘It felt like talking to her,’ Hamel says.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

DeepSeek dominates AI crypto trading challenge

Chinese AI model DeepSeek V3.1 has outperformed its global competitors in a real-market cryptocurrency trading challenge, earning over 10 per cent profit in just a few days.

The experiment, named Alpha Arena, was launched by US research firm Nof1 to test the investing skills of leading large language models (LLMs).

Each participating AI was given US$10,000 to trade in six cryptocurrency perpetual contracts, including bitcoin and solana, on the decentralised exchange Hyperliquid. By Tuesday afternoon, DeepSeek V3.1 led the field, while OpenAI’s GPT-5 trailed behind with a loss of nearly 40 per cent.

The competition highlights the growing potential of AI models to make autonomous financial decisions in real markets.

It also underscores the rivalry between Chinese and American AI developers as they push to demonstrate their models’ adaptability beyond traditional text-based tasks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Kenya leads the way in AI skilling across Africa

Kenya’s AI National Skilling Initiative (AINSI) is offering valuable insights for African countries aiming to build digital capabilities. With AI projected to create 230 million digital jobs across Africa by 2030, coordinated investment in skills development is vital to unlock this potential.

Despite growing ambition, fragmented efforts and uneven progress continue to limit impact.

Government leadership plays a central role in building national AI capacity. Kenya’s Regional Centre of Competence for Digital and AI Skilling has trained thousands of public servants through structured bootcamps and online programmes.

Standardising credentials and aligning training with industry needs are crucial to ensure skilling efforts translate into meaningful employment.

Industry and the informal economy are key to scaling transformation. Partnerships with the Kenya Private Sector Alliance (KEPSA) and MESH are training entrepreneurs and SMEs in AI and cybersecurity while tackling affordability, connectivity, and data access challenges.

Education initiatives, from K–12 to universities and technical institutions, are embedding AI training into curricula to prepare future generations.

Civil society collaboration further broadens access, with community-based programmes reaching gig workers and underserved groups. Kenya’s approach shows how inclusive, cross-sector frameworks can scale digital skills and support Africa’s AI-driven growth.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

YouTube launches likeness detection to protect creators from AI misuse

YouTube has expanded its AI safeguards with a new likeness detection system that identifies AI-generated videos imitating creators’ faces or voices. The tool is now available to eligible members of the YouTube Partner Program after a limited pilot phase.
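
YouTube has not published how its detector works, so the following Python sketch only illustrates the general idea behind likeness matching: comparing an embedding of a creator’s reference face or voice against embeddings of uploaded videos and flagging close matches. The encoder, the 0.85 threshold, and the upload IDs are hypothetical assumptions for the example.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def flag_likeness(reference: np.ndarray, candidates: dict[str, np.ndarray],
                  threshold: float = 0.85) -> list[str]:
    """Return IDs of uploads whose embedding closely matches the creator's
    reference embedding (the threshold is an illustrative choice)."""
    return [vid for vid, emb in candidates.items()
            if cosine_similarity(reference, emb) >= threshold]

# Toy example with random 128-dim vectors standing in for a real
# face or voice encoder's output.
rng = np.random.default_rng(0)
creator_ref = rng.normal(size=128)
uploads = {
    "upload_A": creator_ref + 0.05 * rng.normal(size=128),  # near-duplicate likeness
    "upload_B": rng.normal(size=128),                        # unrelated video
}
print(flag_likeness(creator_ref, uploads))  # expected: ['upload_A']
```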

Creators can review detected videos and request their removal under YouTube’s privacy rules or submit copyright claims.

YouTube said the feature aims to protect users from having their image used to promote products or spread misinformation without consent.

The onboarding process requires identity verification through a short selfie video and photo ID. Creators can opt out at any time, with scanning ending within a day of deactivation.

YouTube has backed recent legislative efforts, such as the NO FAKES Act in the US, which targets deceptive AI replicas. The move highlights growing industry concern over deepfake misuse and the protection of digital identity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Netherlands and China in talks to resolve Nexperia dispute

The Dutch Economy Minister has spoken with his Chinese counterpart to ease tensions following the Netherlands’ recent seizure of Nexperia, a major Dutch semiconductor firm.

China, where most of Nexperia’s chips are produced and sold, reacted by blocking exports, creating concern among European carmakers reliant on its components.

Vincent Karremans said he had discussed ‘further steps towards reaching a solution’ with Chinese Minister of Commerce Wang Wentao.

Both sides emphasised the importance of finding an outcome that benefits Nexperia, as well as the Chinese and European economies.

Meanwhile, Nexperia’s China division has begun asserting its independence, telling employees they may reject ‘external instructions’.

The firm remains a subsidiary of Shanghai-listed Wingtech, which has faced growing scrutiny from European regulators over national security and strategic technology supply chains.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta strengthens protection for older adults against online scams

US tech giant Meta has intensified its campaign against online scams targeting older adults, marking Cybersecurity Awareness Month with new safety tools and global partnerships.

The company said it had detected and disrupted nearly eight million fraudulent accounts on Facebook and Instagram since January, many linked to organised scam centres operating across Asia and the Middle East.

The social media giant is joining the National Elder Fraud Coordination Center in the US, alongside partners including Google, Microsoft and Walmart, to strengthen investigations into large-scale fraud operations.

It is also collaborating with law enforcement and research groups such as Graphika to identify scams involving fake customer service pages, fraudulent financial recovery services and deceptive home renovation schemes.

Meta continues to roll out product updates to improve online safety. WhatsApp now warns users when they share screens with unknown contacts, while Messenger is testing AI-powered scam detection that alerts users to suspicious messages.

Across Facebook, Instagram and WhatsApp, users can activate passkeys and complete a Security Checkup to reinforce account protection.

The company has also partnered with organisations worldwide to raise scam awareness among older adults, from digital literacy workshops in Bangkok to influencer-led safety campaigns across Europe and India.

These efforts form part of Meta’s ongoing drive to protect users through a mix of education, advanced technology and cross-industry cooperation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Teachers become intelligence coaches in AI-driven learning

AI is reshaping education, pushing teachers to act as intelligence coaches and co-creators instead of traditional instructors.

Experts at an international conference hosted in Greece to celebrate Athens College’s centennial discussed how AI personalises learning and demands a redefined teaching role.

Bill McDiarmid, professor emeritus at the University of North Carolina, said educators must now ask students where they find their information and why they trust it.

Similarly, Yong Zhao of the University of Kansas highlighted that AI enables individualised learning, allowing every student to achieve their full potential.

Speakers agreed AI should serve as a supportive partner, not a replacement, helping schools prepare students for an active role in shaping their futures.

Speakers at the event, held under the auspices of Greek President Konstantinos Tasoulas, also urged caution when experimenting with AI on minors, citing potential long-term risks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT to exit WhatsApp after Meta policy change

OpenAI says ChatGPT will leave WhatsApp on 15 January 2026, following Meta’s new rules banning general-purpose AI chatbots on the platform. ChatGPT will remain available on iOS, Android, and the web, the company said.

Users are urged to link their WhatsApp number to a ChatGPT account to preserve history, as WhatsApp doesn’t support chat exports. OpenAI will also let users unlink their phone numbers after linking.

Until now, users could message ChatGPT on WhatsApp to ask questions, search the web, generate images, or talk to the assistant. Similar third-party bots offered comparable features.

Meta quietly updated WhatsApp’s business API to prohibit AI providers from accessing or using it, directly or indirectly. The change effectively forces ChatGPT, Perplexity, Luzia, Poke, and others to shut down their WhatsApp bots.

The move highlights platform risk for AI assistants and shifts demand toward native apps and the web. Businesses relying on WhatsApp AI automations will need alternatives that comply with Meta’s policies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Innovation versus risk shapes Australia’s AI debate

At the AI Leadership Summit in Brisbane, Australia’s business leaders were urged to adopt AI now to stay competitive, despite the absence of hard rules. The National AI Centre unveiled revised voluntary guidelines, and Assistant Minister Andrew Charlton said a national AI plan will arrive later this year.

The guidance sets six priorities, from stress-testing and human oversight to clearer accountability, aiming to give boards practical guardrails. Speakers from NVIDIA, OpenAI, and legal and academic circles welcomed the direction but pressed for regulatory certainty to unlock stalled investment.

Charlton said the plan will focus on economic opportunity, equitable access, and risk mitigation, noting some harms are already banned, including ‘nudify’ apps. He argued Australia will be poorer if it hesitates, and regulators must be ready to address new threats directly.

The debate centred on proportional regulation: too many rules could stifle innovation, said Clayton Utz partner Simon Newcomb, yet delays and ambiguity can also chill projects. A ‘gap analysis’ announced by Treasurer Jim Chalmers will map which risks existing laws already cover.

CyberCX’s Alastair MacGibbon warned that criminals are using AI to deliver sharper phishing attacks and flagged the return of erotic features in some chatbots as an oversight test. His message echoed across panels: move fast with governance, or risk ceding both competitiveness and safety.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!