Alan Hamel says he’s moving ahead with a ‘Suzanne AI Twin’ to honor Suzanne Somers’ legacy. The project mirrors plans the couple discussed for decades. He shared an early demo at a recent conference.
Hamel describes the prototype as startlingly lifelike. He says side-by-side, he can’t tell real from AI. The goal is to preserve Suzanne’s voice, look, and mannerisms.
Planned uses include archival storytelling, fan Q&As, and curated appearances. The team is training the model on interviews, performances, and writings. Rights and guardrails are being built in.
Supporters see a new form of remembrance. Critics warn of deepfake risks and consent boundaries. Hamel says fidelity and respect are non-negotiable.
Next steps include wider testing and a controlled public debut. Proceeds could fund causes Suzanne championed. ‘It felt like talking to her,’ Hamel says.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
AI is reshaping education, pushing teachers to act as intelligence coaches and co-creators instead of traditional instructors.
Experts at an international conference in Greece, held to mark Athens College’s centennial, discussed how AI personalises learning and demands a redefined teaching role.
Bill McDiarmid, professor emeritus at the University of North Carolina, said educators must now ask students where they find their information and why they trust it.
Similarly, Yong Zhao of the University of Kansas highlighted that AI enables individualised learning, allowing every student to achieve their full potential.
Speakers agreed AI should serve as a supportive partner, not a replacement, helping schools prepare students for an active role in shaping their futures.
The event, held under Greek President Konstantinos Tasoulas’ auspices, also urged caution when experimenting with AI on minors due to potential long-term risks.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Reports surfaced earlier this month showing Sora 2 users creating deepfakes of actor Bryan Cranston and other public figures. Several Hollywood agencies criticised OpenAI for requiring individuals to opt out of replication instead of opting in.
Major talent agencies, including UTA and CAA, co-signed a joint statement with OpenAI and industry unions. They pledged to collaborate on ethical standards for AI-generated media and ensure artists can decide how they are represented.
The incident underscores growing tension between entertainment professionals and AI developers. As generative video tools evolve, performers and studios are demanding clear boundaries around consent and digital replication.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Australia’s business leaders were urged to adopt AI now to stay competitive, despite the absence of hard rules, at the AI Leadership Summit in Brisbane. The National AI Centre unveiled revised voluntary guidelines, and Assistant Minister Andrew Charlton said a national AI plan will arrive later this year.
The guidance sets six priorities, from stress-testing and human oversight to clearer accountability, aiming to give boards practical guardrails. Speakers from NVIDIA, OpenAI, and legal and academic circles welcomed the direction but pressed for regulatory certainty to unlock stalled investment.
Charlton said the plan will focus on economic opportunity, equitable access, and risk mitigation, noting some harms are already banned, including ‘nudify’ apps. He argued Australia will be poorer if it hesitates, and regulators must be ready to address new threats directly.
The debate centred on proportional regulation: too many rules could stifle innovation, said Clayton Utz partner Simon Newcomb, yet delays and ambiguity can also chill projects. A ‘gap analysis’ announced by Treasurer Jim Chalmers will map which risks existing laws already cover.
CyberCX’s Alastair MacGibbon warned that criminals are using AI to deliver sharper phishing attacks and flagged the return of erotic features in some chatbots as an oversight test. His message echoed across panels: move fast with governance, or risk ceding both competitiveness and safety.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Text With Jesus, an AI chatbot from Catloaf Software, lets users message figures like ‘Jesus’ and ‘Moses’ for scripture-quoting replies. CEO Stéphane Peter says curiosity is driving rapid growth despite accusations of blasphemy and worries about tech intruding on faith.
Built on OpenAI’s ChatGPT, the app now includes AI pastors and counsellors for questions on scripture, ethics, and everyday dilemmas. Peter, who describes himself as not particularly religious, says the aim is access and engagement, not replacing ministry or community.
Examples range from ‘Do not be anxious…’ (Philippians 4:6) to the Golden Rule (Matthew 7:12), with answers framed in familiar verse. Fans call it a safe, approachable way to explore belief; critics argue only scripture itself should speak.
Faith leaders and commentators have cautioned against mistaking AI outputs for wisdom. The Vatican has stressed that AI is a tool, not truth, and that young people need guidance, not substitution, in spiritual formation.
Reception is sharply split online. Supporters praise its convenience and its knack for sparking curiosity; detractors cite theological drift, emoji-laden replies, and a ‘Satan’ chat mode they find chilling. The app holds a 4.7 rating on the Apple App Store from more than 2,700 reviews.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Anthropic has unveiled Claude for Life Sciences, its first major launch in the biotechnology sector.
The new platform integrates Anthropic’s AI models with leading scientific tools such as Benchling, PubMed, 10x Genomics and Synapse.org, offering researchers an intelligent assistant throughout the discovery process.
The system supports tasks from literature reviews and hypothesis development to data analysis and drafting regulatory submissions. According to Anthropic, what once took days of validation and manual compilation can now be completed in minutes, giving scientists more time to focus on innovation.
The initiative follows the company’s appointment of Eric Kauderer-Abrams as head of biology and life sciences. He described the launch as a ‘threshold moment’, signalling Anthropic’s ambition to make Claude as central to global life science research as it has become to coding.
Built on the newly released Claude Sonnet 4.5 model, which excels at interpreting lab protocols, the platform connects with partners including AWS, Google Cloud, KPMG and Deloitte.
While Anthropic recognises that AI cannot accelerate physical trials, it aims to transform time-consuming processes and promote responsible digital transformation across the life sciences.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
More than 40 civil society organisations have asked the European Commission to investigate Ireland’s privacy regulator. Their letter questions whether the Irish Data Protection Commission (DPC) remains independent following the appointment of a former Meta lobbyist as Commissioner.
Niamh Sweeney, previously Facebook’s head of public policy for Ireland, became the DPC’s third commissioner in September. Her appointment has raised concerns among digital rights groups, given the DPC’s role in overseeing compliance with the EU’s General Data Protection Regulation (GDPR).
The letter calls for a formal work programme to ensure that data protection rules are enforced consistently and free from political or corporate influence. Civil society groups argue that effective oversight is essential to preserve citizens’ trust and uphold the GDPR’s credibility.
The DPC, headquartered in Dublin, supervises major tech firms such as Meta, Apple, and Google under the EU’s privacy regime. Critics have long accused it of being too lenient toward large companies operating in Ireland’s digital sector.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Bilal Abu-Ghazaleh has launched 1001 AI, a London–Dubai startup building an AI-native operating system for critical MENA industries. The two-month-old firm raised a $9m seed round from CIV, General Catalyst and Lux Capital, with angel investors including Chris Ré, Amjad Masad and Amira Sajwani.
Target sectors include airports, ports, construction, and oil and gas, where 1001 AI sees billions in avoidable inefficiencies. Its engine ingests live operational data, models workflows and issues real-time directives, rerouting vehicles, reassigning crews and adjusting plans autonomously.
Abu-Ghazaleh brings scale-up experience from Hive AI and Scale AI, where he led GenAI operations and contributor networks. 1001 borrows a consulting-style rollout: embed with clients, co-develop the model, then standardise reusable patterns across similar operational flows.
Investors argue the Gulf is an ideal test bed given sovereign-backed AI ambitions and under-digitised, mission-critical infrastructure. Deena Shakir of Lux says the region is ripe for AI that optimises physical operations at scale, from flight turnarounds to cargo moves.
First deployments are slated for construction by year-end, with aviation and logistics to follow. The funding supports early pilots and hiring across engineering, operations and go-to-market, as 1001 aims to become the Gulf’s orchestration layer before expanding globally.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Wikipedia’s human traffic has fallen by 8% over the past year, a decline the Wikimedia Foundation attributes to changing information habits driven by AI and social media.
The foundation’s Marshall Miller explained that updates to Wikipedia’s bot detection system showed much of the earlier traffic surge had come from undetected bots, exposing a sharper drop in genuine visits.
Miller pointed to the growing use of AI-generated search summaries and the rise of short-form video as key factors. Search engines now provide direct answers using generative AI instead of linking to external sources, while younger users increasingly turn to social video platforms rather than traditional websites.
Although Wikipedia’s knowledge continues to feed AI models, fewer people are reaching the original source.
The foundation warns that the shift poses risks to Wikipedia’s volunteer-driven ecosystem and donation-based model. With fewer visitors, fewer contributors may update content and fewer donors may provide financial support.
Miller urged AI companies and search engines to direct users back to the encyclopedia, ensuring both transparency and sustainability.
Wikipedia is responding by developing a new framework for content attribution and expanding efforts to reach new readers. The foundation also encourages users to support human-curated knowledge by citing original sources and recognising the people behind the information that powers AI systems.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
OpenAI signalled a break with Australia’s tech lobby on copyright, with global affairs chief Chris Lehane telling SXSW Sydney the company’s models are ‘going to be in Australia, one way or the other’, regardless of reforms or data-mining exemptions.
Lehane framed two global approaches: US-style fair use that enables ‘frontier’ AI, versus tighter, more traditional copyright regimes that narrow its scope, saying OpenAI will work under either. Asked whether Australia risked losing datacentres without looser laws, he replied ‘No’.
Pressed on launching and monetising Sora 2 before copyright issues are settled, Lehane argued innovation precedes adaptation and said OpenAI aims to ‘benefit everyone’. The company paused videos featuring Martin Luther King Jr.’s likeness after family complaints.
Lehane described the US-China AI rivalry as a ‘very real competition’ over values, predicting that one ecosystem will become the default. He said US-led frontier models would reflect democratic norms, while China’s would ‘probably’ align with autocratic ones.
To sustain a ‘democratic lead’, Lehane said allies must add gigawatt-scale power capacity each week to build AI infrastructure. He called Australia uniquely positioned, citing high AI usage, a 30,000-strong developer base, fibre links to Asia, Five Eyes membership, and fast-growing renewables.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!