OpenAI strengthens controls after Bryan Cranston deepfake incident

Bryan Cranston is grateful that OpenAI tightened safeguards on its video platform Sora 2. The Breaking Bad actor raised concerns after users generated videos using his voice and image without permission.

Reports surfaced earlier this month showing Sora 2 users creating deepfakes of Cranston and other public figures. Several Hollywood agencies criticised OpenAI for requiring individuals to opt out of replication instead of opting in.

Major talent agencies, including UTA and CAA, co-signed a joint statement with OpenAI and industry unions. They pledged to collaborate on ethical standards for AI-generated media and ensure artists can decide how they are represented.

The incident underscores growing tension between entertainment professionals and AI developers. As generative video tools evolve, performers and studios are demanding clear boundaries around consent and digital replication.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Innovation versus risk shapes Australia’s AI debate

At the AI Leadership Summit in Brisbane, Australia’s business leaders were urged to adopt AI now to stay competitive, despite the absence of hard rules. The National AI Centre unveiled revised voluntary guidelines, and Assistant Minister Andrew Charlton said a national AI plan will arrive later this year.

The guidance sets six priorities, from stress-testing and human oversight to clearer accountability, aiming to give boards practical guardrails. Speakers from NVIDIA, OpenAI, and legal and academic circles welcomed direction but pressed for certainty to unlock stalled investment.

Charlton said the plan will focus on economic opportunity, equitable access, and risk mitigation, noting some harms are already banned, including ‘nudify’ apps. He argued Australia will be poorer if it hesitates, and regulators must be ready to address new threats directly.

The debate centred on proportional regulation: too many rules could stifle innovation, said Clayton Utz partner Simon Newcomb, yet delays and ambiguity can also chill projects. A ‘gap analysis’ announced by Treasurer Jim Chalmers will map which risks existing laws already cover.

CyberCX’s Alastair MacGibbon warned that criminals are using AI to deliver sharper phishing attacks and flagged the return of erotic features in some chatbots as an oversight test. His message echoed across panels: move fast with governance, or risk ceding both competitiveness and safety.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI chats with ‘Jesus’ spark curiosity and criticism

Text With Jesus, an AI chatbot from Catloaf Software, lets users message figures like ‘Jesus’ and ‘Moses’ for scripture-quoting replies. CEO Stéphane Peter says curiosity is driving rapid growth despite accusations of blasphemy and worries about tech intruding on faith.

Built on OpenAI’s ChatGPT, the app now includes AI pastors and counsellors for questions on scripture, ethics, and everyday dilemmas. Peter, who describes himself as not particularly religious, says the aim is access and engagement, not replacing ministry or community.

Examples range from ‘Do not be anxious…’ (Philippians 4:6) to the Golden Rule (Matthew 7:12), with answers framed in familiar verse. Fans call it a safe, approachable way to explore belief; critics argue only scripture itself should speak.

Faith leaders and commentators have cautioned against mistaking AI outputs for wisdom. The Vatican has stressed that AI is a tool, not truth, and that young people need guidance, not substitution, in spiritual formation.

Reception is sharply split online. Supporters praise its convenience and its power to spark curiosity; detractors cite theological drift, emoji-laden replies, and a ‘Satan’ mode they find chilling. The app holds a 4.7 rating on the Apple App Store from more than 2,700 reviews.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Anthropic unveils Claude for Life Sciences to transform research efficiency

Anthropic has unveiled Claude for Life Sciences, its first major launch in the biotechnology sector.

The new platform integrates Anthropic’s AI models with leading scientific tools such as Benchling, PubMed, 10x Genomics and Synapse.org, offering researchers an intelligent assistant throughout the discovery process.

The system supports tasks from literature reviews and hypothesis development to data analysis and drafting regulatory submissions. According to Anthropic, what once took days of validation and manual compilation can now be completed in minutes, giving scientists more time to focus on innovation.

The initiative follows the company’s appointment of Eric Kauderer-Abrams as head of biology and life sciences. He described the launch as a ‘threshold moment’, signalling Anthropic’s ambition to make Claude a key player in global life science research, much as it has become in coding.

Built on the newly released Claude Sonnet 4.5 model, which excels at interpreting lab protocols, the platform connects with partners including AWS, Google Cloud, KPMG and Deloitte.

While Anthropic recognises that AI cannot accelerate physical trials, it aims to transform time-consuming processes and promote responsible digital transformation across the life sciences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Civil groups question independence of Irish privacy watchdog

More than 40 civil society organisations have asked the European Commission to investigate Ireland’s privacy regulator. Their letter questions whether the Irish Data Protection Commission (DPC) remains independent following the appointment of a former Meta lobbyist as Commissioner.

Niamh Sweeney, previously Facebook’s head of public policy for Ireland, became the DPC’s third commissioner in September. Her appointment has raised concerns among digital rights groups, given the DPC’s role in overseeing compliance with the EU’s General Data Protection Regulation.

The letter calls for a formal work programme to ensure that data protection rules are enforced consistently and free from political or corporate influence. Civil society groups argue that effective oversight is essential to preserve citizens’ trust and uphold the GDPR’s credibility.

The DPC, headquartered in Dublin, supervises major tech firms such as Meta, Apple, and Google under the EU’s privacy regime. Critics have long accused it of being too lenient toward large companies operating in Ireland’s digital sector.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Startup raises $9m to orchestrate Gulf digital infrastructure

Bilal Abu-Ghazaleh has launched 1001 AI, a London–Dubai startup building an AI-native operating system for critical MENA industries. The two-month-old firm raised a $9m seed round from CIV, General Catalyst and Lux Capital, with angels including Chris Ré, Amjad Masad and Amira Sajwani.

Target sectors include airports, ports, construction, and oil and gas, where 1001 AI sees billions in avoidable inefficiencies. Its engine ingests live operational data, models workflows and issues real-time directives, rerouting vehicles, reassigning crews and adjusting plans autonomously.

Abu-Ghazaleh brings scale-up experience from Hive AI and Scale AI, where he led GenAI operations and contributor networks. 1001 borrows a consulting-style rollout: embed with clients, co-develop the model, then standardise reusable patterns across similar operational flows.

Investors argue the Gulf is an ideal test bed given sovereign-backed AI ambitions and under-digitised, mission-critical infrastructure. Deena Shakir of Lux says the region is ripe for AI that optimises physical operations at scale, from flight turnarounds to cargo moves.

First deployments are slated for construction by year-end, with aviation and logistics to follow. The funding supports early pilots and hiring across engineering, operations and go-to-market, as 1001 aims to become the Gulf’s orchestration layer before expanding globally.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Wikipedia faces traffic decline as AI and social video reshape online search

Wikipedia’s human traffic has fallen by 8% over the past year, a decline the Wikimedia Foundation attributes to changing information habits driven by AI and social media.

The foundation’s Marshall Miller explained that updates to Wikipedia’s bot detection system showed much of an earlier traffic surge had come from undetected bots, revealing a sharper drop in genuine visits.
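A toy calculation shows why better bot detection can turn a mild apparent decline into a sharper one. All figures below are invented for illustration; they are not Wikimedia’s actual data or methodology:

```python
# Toy illustration: reclassifying bot traffic changes the apparent trend.
last_year_raw = 1000  # total recorded pageviews (arbitrary units)
this_year_raw = 980   # ~2% apparent decline in raw traffic

# Suppose improved bot detection reveals more bot traffic in both years,
# and that bots made up a larger share of this year's raw total.
last_year_bots = 120
this_year_bots = 170

last_year_human = last_year_raw - last_year_bots  # 880
this_year_human = this_year_raw - this_year_bots  # 810

decline = (last_year_human - this_year_human) / last_year_human
print(f"Human traffic decline: {decline:.1%}")  # ~8.0%, far steeper than the raw 2%
```

The raw totals suggest only a small dip, but once bot requests are stripped out, the human-only trend is markedly worse, which is the pattern the foundation describes.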

Miller pointed to the growing use of AI-generated search summaries and the rise of short-form video as key factors. Search engines now provide direct answers using generative AI instead of linking to external sources, while younger users increasingly turn to social video platforms rather than traditional websites.

Although Wikipedia’s knowledge continues to feed AI models, fewer people are reaching the original source.

The foundation warns that the shift poses risks to Wikipedia’s volunteer-driven ecosystem and donation-based model. With fewer visitors, fewer contributors may update content and fewer donors may provide financial support.

Miller urged AI companies and search engines to direct users back to the encyclopedia, ensuring both transparency and sustainability.

Wikipedia is responding by developing a new framework for content attribution and expanding efforts to reach new readers. The foundation also encourages users to support human-curated knowledge by citing original sources and recognising the people behind the information that powers AI systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Lehane backs OpenAI’s Australia presence as AI copyright debate heats up

OpenAI signalled a break with Australia’s tech lobby on copyright, with global affairs chief Chris Lehane telling SXSW Sydney the company’s models are ‘going to be in Australia, one way or the other’, regardless of reforms or data-mining exemptions.

Lehane framed two global approaches: US-style fair use that enables ‘frontier’ AI, versus a tighter, historical copyright that narrows scope, saying OpenAI will work under either regime. Asked if Australia risked losing datacentres without looser laws, he replied ‘No’.

Pressed on launching and monetising Sora 2 before copyright issues are settled, Lehane argued innovation precedes adaptation and said OpenAI aims to ‘benefit everyone’. The company paused videos featuring Martin Luther King Jr.’s likeness after family complaints.

Lehane described the US-China AI rivalry as a ‘very real competition’ over values, predicting that one ecosystem will become the default. He said US-led frontier models would reflect democratic norms, while China’s would ‘probably’ align with autocratic ones.

To sustain a ‘democratic lead’, Lehane said allies must add gigawatt-scale power capacity each week to build AI infrastructure. He called Australia uniquely positioned, citing high AI usage, a 30,000-strong developer base, fibre links to Asia, Five Eyes membership, and fast-growing renewables.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft to support UAE investment analytics with responsible AI tools

The UAE Ministry of Investment and Microsoft signed a Memorandum of Understanding at GITEX Global 2025 to apply AI to investment analytics, financial forecasting, and retail optimisation. The deal aims to strengthen data governance across the investment ecosystem.

Under the MoU, Microsoft will support upskilling through its AI National Skilling Initiative, targeting 100,000 government employees. Training will focus on practical adoption, responsible use, and measurable outcomes, in line with the UAE’s National AI Strategy 2031.

Both parties will promote best practices in data management using Azure services such as Data Catalog and Purview. Workshops and knowledge-sharing sessions with local experts will standardise governance. Strong controls are positioned as the foundation for trustworthy AI at scale.

The agreement was signed by His Excellency Mohammad Alhawi and Amr Kamel. Officials say the collaboration will embed AI agents into workflows while maintaining compliance. Investment teams are expected to gain real-time insights and automation that shorten the time to action.

The partnership supports the ambition to make the UAE a leader in AI-enabled investment. It also signals deeper public–private collaboration on sovereign capabilities. With skills, standards, and use cases in place, the ministry aims to attract capital and accelerate diversification.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Scaling a cell ‘language’ model yields new immunotherapy leads

Yale University and Google unveiled Cell2Sentence-Scale 27B, a 27-billion-parameter model built on Gemma to decode the ‘language’ of cells. The system generated a novel hypothesis about cancer cell behaviour, and Google CEO Sundar Pichai called it ‘an exciting milestone’ for AI in science.

The work targets a core problem in immunotherapy: many tumours are ‘cold’ and evade immune detection. Making them visible requires boosting antigen presentation. C2S-Scale sought a ‘conditional amplifier’ drug that boosts signals only in immune-context-positive settings.

Smaller models lacked the reasoning to solve the problem, but scaling to 27B parameters unlocked the capability. The team then simulated 4,000 drugs across patient samples. The model flagged context-specific boosters of antigen presentation, with 10–30% already known and the rest entirely novel.

Researchers emphasise that conditional amplification aims to raise immune signals only where key proteins are present. That could reduce off-target effects and make ‘cold’ tumours discoverable. The result hints at AI-guided routes to more precise cancer therapies.
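The selection logic behind a ‘conditional amplifier’ screen can be sketched in a few lines: keep only candidates that boost the antigen-presentation signal in immune-context-positive samples while staying near baseline in immune-context-negative ones. The drug names, effect scores, and thresholds below are invented for illustration and have no relation to the actual C2S-Scale pipeline:

```python
# Toy sketch of a "conditional amplifier" filter over simulated drug effects.
# Each drug maps to (fold-change in immune-positive context,
#                    fold-change in immune-negative context).
drugs = {
    "drug_A": (2.1, 1.9),  # boosts everywhere -> likely off-target
    "drug_B": (2.4, 1.0),  # boosts only where immune context is present
    "drug_C": (1.0, 1.0),  # no effect
    "drug_D": (1.8, 1.1),  # conditional amplifier
}

def is_conditional_amplifier(pos_effect, neg_effect,
                             boost_threshold=1.5, neutral_band=1.2):
    """Strong boost in immune-positive samples, near-baseline elsewhere."""
    return pos_effect >= boost_threshold and neg_effect <= neutral_band

hits = [name for name, (p, n) in drugs.items()
        if is_conditional_amplifier(p, n)]
print(hits)  # ['drug_B', 'drug_D']
```

Filtering on both contexts at once is what distinguishes a conditional amplifier from a drug that raises the signal indiscriminately, which is the off-target behaviour the researchers want to avoid.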

Google has released C2S-Scale 27B on GitHub and Hugging Face for the community to explore. The approach blends large-scale language modelling with cell biology, signalling a new toolkit for hypothesis generation, drug prioritisation, and patient-relevant testing.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!