UC Santa Cruz uses NVIDIA AI to map global coastal flood risks

Researchers at the University of California, Santa Cruz, are using NVIDIA’s accelerated computing to model coastal flooding and support climate adaptation planning.

Led by Professor Michael Beck, the team develops high-resolution, GPU-powered visualisations to assess how coral reefs, mangroves, and dunes can reduce flood damage.

The team employs NVIDIA CUDA-X software and RTX GPUs to cut flood simulation times from six hours to about 40 minutes. Using tools such as SFINCS and Unreal Engine 5, it can now generate interactive visual models of storm-impact scenarios, providing vital insights for governments and insurers.

The researchers’ current goal is to map flooding risks across small island states worldwide ahead of COP30. Their previous visualisations have already helped secure reef insurance policies in Mexico’s Mesoamerican Barrier Reef region, ensuring funding for coral restoration after severe storms.

The project, part of CoSMoS ADAPT, aims to expand the US Geological Survey’s coastal modelling system and integrate nature-based solutions such as dunes and reefs into large-scale flood resilience strategies.

Through NVIDIA’s technology and academic grants, the initiative demonstrates how accelerated computing can drive real-world environmental protection.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK data stays in the UK as OpenAI rolls out residency

OpenAI will offer UK data residency for its API Platform, ChatGPT Enterprise, and ChatGPT Edu from 24 October. The option, announced by Deputy PM David Lammy, is tied to a Ministry of Justice partnership. The government says it boosts privacy, security, and resilience for public services and business.

Lammy will unveil the ‘sovereign capability’ at OpenAI Frontiers, citing early MoJ efficiency gains. Over 1,000 probation officers will use Justice Transcribe to record and auto-transcribe offender meetings. Hours of admin shift to AI so staff can focus on supervision and public protection.

OpenAI CEO Sam Altman says UK usage has quadrupled in the past year. The company pitches AI as a way to save time and lift productivity across sectors. MoJ pilots have sparked interest from other departments, with broader adoption expected.

Data residency is a key blocker for regulated sectors, and this move aims to address that gap. Keeping data within the UK can simplify compliance and reduce perceived risk. It also underpins continuity plans by localising sensitive workloads.

ChatGPT Atlas, an AI-first web browser, was also announced this week. Its arrival could nudge users away from keyword searches toward conversational answers. OpenAI faces rivals Anthropic, Perplexity, and big tech incumbents in that shift.

AI leaders call for a global pause in superintelligence development

More than 850 public figures, including leading computer scientists Geoffrey Hinton and Yoshua Bengio, have signed a joint statement urging a global slowdown in the development of artificial superintelligence.

The open letter warns that unchecked progress could lead to human economic displacement, loss of freedom, and even extinction.

The appeal follows growing anxiety that the rush toward machines surpassing human cognition could spiral beyond human control. Alan Turing predicted as early as the 1950s that machines might eventually dominate by default, a view that continues to resonate among AI researchers today.

Despite such fears, global powers still view the AI race as essential for national security and technological advancement.

Tech firms like Meta are also exploiting the superintelligence label to promote their most ambitious models, while leaders such as OpenAI’s Sam Altman and Microsoft’s Mustafa Suleyman have previously acknowledged the existential risks of developing systems beyond human understanding.

The statement calls for an international prohibition on superintelligence research until there is a broad scientific consensus on safety and public approval.

Its signatories include technologists, academics, religious figures, and cultural personalities, reflecting a rare cross-sector demand for restraint in an era defined by rapid automation.

CMC pegs JLR hack at £1.9bn with 5,000 firms affected

JLR’s cyberattack is pegged at £1.9bn, the UK’s costliest on record. Production paused for five weeks from 1 September across Solihull, Halewood, and Wolverhampton. The Cyber Monitoring Centre (CMC) says 5,000 firms were hit, with full recovery expected by January 2026.

JLR is restoring manufacturing in phases and declined to comment on the estimate. UK dealer systems were intermittently down, orders were cancelled or delayed, and suppliers faced uncertainty. More than half of the losses fall on JLR; the remainder hits its supply chain and local economies.

The CMC classed the incident as Category 3 on its five-level scale. Chair Ciaran Martin warned organisations to harden critical networks and plan for disruption. The CMC’s assessment draws on public data, surveys, and interviews rather than on disclosed forensic evidence.

Researchers say costs hinge on the attack type, which JLR has not confirmed. Data theft is faster to recover from than ransomware; wiper malware would be worse still. A hacker group that claimed responsibility, linked to earlier high-profile breaches, remains unverified.

The CMC’s estimate excludes any ransom, which could add tens of millions of dollars. Earlier this year, retail hacks at M&S, the Co-op, and Harrods were tagged Category 2. Those were pegged at £270m–£440m, below the £506m cited by some victims.

USB inventor and Phison CEO warns of an AI storage crunch

Datuk Pua Khein-Seng, inventor of the single-chip USB flash drive and CEO of Phison, warns that AI machines will generate 1,000 times more data than humans. He says the real bottleneck isn’t GPUs but memory, foreshadowing a global storage crunch as AI scales.

Speaking at GITEX Global, Pua outlined Phison’s focus on NAND controllers and systems that can expand effective memory. Adaptive tiering across DRAM and flash, he argues, will ease constraints and cut costs, making AI deployments more attainable beyond elite data centres.

Flash becomes the expansion valve: DRAM stays scarce and expensive, while high-end GPUs take more of the blame for AI cost overruns than they deserve. By intelligently offloading and caching to NAND, cheaper accelerators can still drive useful workloads, widening access to AI capacity.
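The tiering idea Pua describes, keeping hot data in scarce DRAM and spilling colder data to cheaper flash, can be sketched as a toy two-tier cache. This is a hypothetical illustration of the general technique, not Phison’s actual controller logic:

```python
from collections import OrderedDict

class TieredStore:
    """Toy two-tier key-value store: a small, fast 'DRAM' tier backed by a
    larger, slower 'flash' tier. Illustrates the offload/caching idea only;
    real NAND tiering is far more sophisticated."""

    def __init__(self, dram_capacity: int):
        self.dram_capacity = dram_capacity
        self.dram: OrderedDict[str, bytes] = OrderedDict()  # hot tier (LRU order)
        self.flash: dict[str, bytes] = {}                   # cold tier

    def put(self, key: str, value: bytes) -> None:
        self.dram[key] = value
        self.dram.move_to_end(key)                # mark as most recently used
        while len(self.dram) > self.dram_capacity:
            cold_key, cold_val = self.dram.popitem(last=False)
            self.flash[cold_key] = cold_val       # evict LRU item to flash

    def get(self, key: str) -> bytes:
        if key in self.dram:                      # DRAM hit: cheap access
            self.dram.move_to_end(key)
            return self.dram[key]
        value = self.flash.pop(key)               # flash hit: slower access
        self.put(key, value)                      # promote hot data back to DRAM
        return value
```

The design choice mirrors the argument in the article: capacity looks like the sum of both tiers, while the expensive tier stays fixed in size and only holds the working set.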

Cloud centralisation intensifies the risk. With the US and China dominating the AI cloud market, many countries lack the capital and talent to build sovereign stacks. Pua calls for ‘AI blue-collar’ skills to localise open source and tailor systems to real-world applications.

Storage leadership is consolidating in the US, Japan, Korea, and China, with Taiwan rising as a fifth pillar. Hardware strength alone won’t suffice, Pua says; Taiwan must close the AI software gap to capture more value in the data era.

Suzanne Somers lives on in an AI twin

Alan Hamel says he’s moving ahead with a ‘Suzanne AI Twin’ to honor Suzanne Somers’ legacy. The project mirrors plans the couple discussed for decades. He shared an early demo at a recent conference.

Hamel describes the prototype as startlingly lifelike. He says that, viewed side by side, he can’t tell the real Suzanne from the AI. The goal is to preserve her voice, look, and mannerisms.

Planned uses include archival storytelling, fan Q&As, and curated appearances. The team is training the model on interviews, performances, and writings. Rights and guardrails are being built in.

Supporters see a new form of remembrance. Critics warn of deepfake risks and consent boundaries. Hamel says fidelity and respect are non-negotiable.

Next steps include wider testing and a controlled public debut. Proceeds could fund causes Suzanne championed. ‘It felt like talking to her,’ Hamel says.

Teachers become intelligence coaches in AI-driven learning

AI is reshaping education, pushing teachers to act as intelligence coaches and co-creators instead of traditional instructors.

Experts at an international conference hosted in Greece to celebrate Athens College’s centennial discussed how AI personalises learning and demands a redefined teaching role.

Bill McDiarmid, professor emeritus at the University of North Carolina, said educators must now ask students where they find their information and why they trust it.

Similarly, Yong Zhao of the University of Kansas highlighted that AI enables individualised learning, allowing every student to achieve their full potential.

Speakers agreed AI should serve as a supportive partner, not a replacement, helping schools prepare students for an active role in shaping their futures.

The event, held under Greek President Konstantinos Tasoulas’ auspices, also urged caution when experimenting with AI on minors due to potential long-term risks.

OpenAI strengthens controls after Bryan Cranston deepfake incident

Bryan Cranston is grateful that OpenAI tightened safeguards on its video platform Sora 2. The Breaking Bad actor raised concerns after users generated videos using his voice and image without permission.

Reports surfaced earlier this month showing Sora 2 users creating deepfakes of Cranston and other public figures. Several Hollywood agencies criticised OpenAI for requiring individuals to opt out of replication instead of opting in.

Major talent agencies, including UTA and CAA, co-signed a joint statement with OpenAI and industry unions. They pledged to collaborate on ethical standards for AI-generated media and ensure artists can decide how they are represented.

The incident underscores growing tension between entertainment professionals and AI developers. As generative video tools evolve, performers and studios are demanding clear boundaries around consent and digital replication.

Innovation versus risk shapes Australia’s AI debate

At the AI Leadership Summit in Brisbane, Australia’s business leaders were urged to adopt AI now to stay competitive, despite the absence of hard rules. The National AI Centre unveiled revised voluntary guidelines, and Assistant Minister Andrew Charlton said a national AI plan will arrive later this year.

The guidance sets six priorities, from stress-testing and human oversight to clearer accountability, aiming to give boards practical guardrails. Speakers from NVIDIA, OpenAI, and legal and academic circles welcomed direction but pressed for certainty to unlock stalled investment.

Charlton said the plan will focus on economic opportunity, equitable access, and risk mitigation, noting some harms are already banned, including ‘nudify’ apps. He argued Australia will be poorer if it hesitates, and regulators must be ready to address new threats directly.

The debate centred on proportional regulation: too many rules could stifle innovation, said Clayton Utz partner Simon Newcomb, yet delays and ambiguity can also chill projects. A ‘gap analysis’ announced by Treasurer Jim Chalmers will map which risks existing laws already cover.

CyberCX’s Alastair MacGibbon warned that criminals are using AI to deliver sharper phishing attacks and flagged the return of erotic features in some chatbots as an oversight test. His message echoed across panels: move fast with governance, or risk ceding both competitiveness and safety.

AI chats with ‘Jesus’ spark curiosity and criticism

Text With Jesus, an AI chatbot from Catloaf Software, lets users message figures like ‘Jesus’ and ‘Moses’ for scripture-quoting replies. CEO Stéphane Peter says curiosity is driving rapid growth despite accusations of blasphemy and worries about tech intruding on faith.

Built on OpenAI’s ChatGPT, the app now includes AI pastors and counsellors for questions on scripture, ethics, and everyday dilemmas. Peter, who describes himself as not particularly religious, says the aim is access and engagement, not replacing ministry or community.

Examples range from ‘Do not be anxious…’ (Philippians 4:6) to the Golden Rule (Matthew 7:12), with answers framed in familiar verse. Fans call it a safe, approachable way to explore belief; critics argue only scripture itself should speak.

Faith leaders and commentators have cautioned against mistaking AI outputs for wisdom. The Vatican has stressed that AI is a tool, not truth, and that young people need guidance, not substitution, in spiritual formation.

Reception is sharply split online. Supporters praise its convenience and the curiosity it sparks; detractors cite theological drift, emoji-laden replies, and a ‘Satan’ mode they find chilling. The app holds a 4.7 rating on the Apple App Store from more than 2,700 reviews.
