Lehane backs OpenAI’s Australia presence as AI copyright debate heats up

OpenAI signalled a break with Australia’s tech lobby on copyright, with global affairs chief Chris Lehane telling SXSW Sydney the company’s models are ‘going to be in Australia, one way or the other’, regardless of whether the country adopts copyright reforms or text-and-data-mining exemptions.

Lehane framed two global approaches: US-style fair use that enables ‘frontier’ AI, versus a tighter, more traditional copyright regime that narrows what models can draw on, saying OpenAI will work under either. Asked if Australia risked losing datacentres without looser laws, he replied ‘No’.

Pressed on launching and monetising Sora 2 before copyright issues are settled, Lehane argued innovation precedes adaptation and said OpenAI aims to ‘benefit everyone’. The company paused videos featuring Martin Luther King Jr.’s likeness after family complaints.

Lehane described the US-China AI rivalry as a ‘very real competition’ over values, predicting that one ecosystem will become the default. He said US-led frontier models would reflect democratic norms, while China’s would ‘probably’ align with autocratic ones.

To sustain a ‘democratic lead’, Lehane said allies must add gigawatt-scale power capacity each week to build AI infrastructure. He called Australia uniquely positioned, citing high AI usage, a 30,000-strong developer base, fibre links to Asia, Five Eyes membership, and fast-growing renewables.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

New method helps AI models locate personalised objects in scenes

MIT and the MIT-IBM Watson AI Lab have developed a training approach that enables generative vision-language models to localise personalised objects (for example, a specific cat) across new scenes, a task at which they previously performed poorly.

While vision-language models (VLMs) reliably recognise generic object categories (dogs, chairs, etc.), they struggle to locate a specific instance, such as your own dog or chair, under new conditions.

To remedy this, the researchers devised a fine-tuning regime built on video-tracking datasets, in which the same object appears across multiple frames.

Crucially, they used pseudo-names (e.g. ‘Charlie’) instead of real object names to prevent the model from relying on memorised label associations. This forces it to reason from context, scene layout, appearance cues, and relative position rather than falling back on category matches.
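To make the recipe concrete, here is a minimal sketch of how such pseudo-named training examples might be assembled from a video-tracking dataset. The record layout, field names, and prompt wording are assumptions for illustration; the paper’s exact data format is not given in this summary.

```python
import random

# Hypothetical record from a video-tracking dataset: several frames in which
# the same object instance appears, each with a bounding-box annotation.
track = {
    "category": "cat",  # real label, deliberately hidden from the model
    "frames": ["frame_001.jpg", "frame_014.jpg", "frame_032.jpg"],
    "boxes": [(34, 50, 120, 160), (60, 48, 150, 170), (12, 80, 98, 190)],
}

PSEUDO_NAMES = ["Charlie", "Pixel", "Momo", "Biscuit"]

def build_example(track):
    """Turn one object track into a localisation prompt/target pair.

    A pseudo-name replaces the category word so the model cannot rely on
    memorised label associations and must use the in-context frames instead.
    """
    name = random.choice(PSEUDO_NAMES)
    # Earlier frames serve as in-context examples of what the object looks like...
    context = list(zip(track["frames"][:-1], track["boxes"][:-1]))
    # ...and the held-out final frame is where the model must localise it.
    query_frame, target_box = track["frames"][-1], track["boxes"][-1]
    prompt = (
        f"These images show {name}: {context}. "
        f"Locate {name} in {query_frame}."
    )
    return prompt, target_box

prompt, target = build_example(track)
print(prompt)
print("target box:", target)
```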

AI models trained with the method showed a 12% average improvement in personalised localisation; in some settings, especially with pseudo-naming, gains reached 21%. Importantly, the enhanced ability did not degrade the models’ general object recognition performance.

Potential applications include smart home cameras recognising your pet, assistive devices helping visually impaired users find items, robotics, surveillance, and ecological monitoring (e.g. tracking particular animals). The approach helps models better generalise from a few example images rather than needing full retraining for each new object.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Adaptive optics meets AI for cellular-scale eye care

AI is moving from lab demos to frontline eye care, with clinicians using algorithms alongside routine fundus photos to spot disease before symptoms appear. The aim is simple: catch diabetic retinopathy early enough to prevent avoidable vision loss and speed referrals for treatment.

New imaging workflows pair adaptive optics with machine learning to shrink scan times from hours to minutes while preserving single-cell detail. At the US National Eye Institute, models recover retinal pigment epithelium features and clean noisy OCT data to make standard scans more informative.

Duke University’s open-source DCAOSLO goes further by combining multiplexed light signals with AI to capture cellular-scale images quickly. The approach eases patient strain and raises the odds of getting diagnostic-quality data in busy clinics.

Clinic-ready diagnostics are already changing triage. LumineticsCore, the first FDA-cleared AI to detect more-than-mild diabetic retinopathy from primary-care images, flags who needs urgent referral in seconds, enabling earlier laser or pharmacologic therapy.

Researchers also see the retina as a window on wider health, linking vascular and choroidal biomarkers to diabetes, hypertension and cardiovascular risk. Standardised AI tools promise more reproducible reads, support for trials and, ultimately, home-based monitoring that extends specialist insight beyond the clinic.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI system links hidden signals in patient records to improve diagnosis

Researchers at Mount Sinai and UC Irvine have developed a novel AI system, InfEHR, which creates a dynamic network of an individual’s medical events and relationships over time. The system detects disease patterns that traditional approaches often miss.

InfEHR transforms time-ordered data (visits, labs, medications, and vital signs) into a graphical network for each patient. It then learns which combinations of clues across that network tend to correlate with hidden disease states.
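As an illustration only (InfEHR’s actual graph construction is not detailed in this summary), a per-patient event graph of this kind could be sketched with a standard graph library, with time-ordered events as nodes and temporal order as edges. The event types and values below are invented.

```python
import networkx as nx

# Toy, time-ordered slice of one patient's record (values are invented).
events = [
    ("visit", "ER admission", "2024-03-01"),
    ("lab", "WBC 18.2", "2024-03-01"),
    ("vital", "temp 38.9C", "2024-03-01"),
    ("med", "ampicillin", "2024-03-02"),
    ("lab", "blood culture: negative", "2024-03-03"),
]

G = nx.DiGraph()
for i, (kind, detail, date) in enumerate(events):
    G.add_node(i, kind=kind, detail=detail, date=date)
    if i > 0:
        # Edge encodes temporal order; a richer model could also link
        # clinically related events (e.g. a lab that prompted a medication).
        G.add_edge(i - 1, i, relation="precedes")

print(G.number_of_nodes(), "events,", G.number_of_edges(), "temporal links")
```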

In testing, with only a few physician-annotated examples, the AI system identified neonatal sepsis without positive blood cultures at rates 12–16× higher than current methods, and post-operative kidney injury with 4–7× more sensitivity than baseline clinical rules.

As a safety feature, InfEHR can also respond ‘not sure’ when the record lacks enough signal, reducing the risk of overconfident errors.
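That ‘not sure’ behaviour amounts to abstaining when confidence is indecisive. A generic version of the safeguard (not InfEHR’s actual mechanism, and with thresholds chosen arbitrarily here) looks like this:

```python
def diagnose(probability: float, low: float = 0.3, high: float = 0.7) -> str:
    """Return a label only when the model's probability is decisive.

    Predictions in the middle band are reported as 'not sure' instead of
    being forced into a yes/no call, trading coverage for reliability.
    """
    if probability >= high:
        return "likely present"
    if probability <= low:
        return "likely absent"
    return "not sure"

for p in (0.92, 0.55, 0.08):
    print(p, "->", diagnose(p))
```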

Because it adapts its reasoning to each patient rather than applying the same rules to all, InfEHR shows promise for personalised diagnostics across hospitals and populations, even with relatively small annotated datasets.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Capita hit with £14 million fine after major data breach

The UK outsourcing firm Capita has been fined £14 million after a cyber-attack exposed the personal data of 6.6 million people. Sensitive information, including financial details, home addresses, passport images, and criminal records, was compromised.

The ICO initially proposed a £45 million fine, but reduced it after Capita improved its cybersecurity, supported affected individuals, and engaged with regulators.

The breach affected 325 of the 600 pension schemes Capita manages, highlighting the risks for organisations handling large-scale sensitive data.

The Information Commissioner’s Office (ICO) criticised Capita for failing to secure personal information, emphasising that proper security measures could have prevented the incident.

Experts note that holding companies financially accountable reinforces the importance of data protection and sends a message to the market.

Capita’s CEO said the company has strengthened its cyber defences and remains vigilant to prevent future breaches.

The UK government has advised companies like Capita to prepare contingency plans following a rise in nationally significant cyberattacks, a trend also seen at Co-op, M&S, Harrods, and Jaguar Land Rover earlier in the year.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft to support UAE investment analytics with responsible AI tools

The UAE Ministry of Investment and Microsoft signed a Memorandum of Understanding at GITEX Global 2025 to apply AI to investment analytics, financial forecasting, and retail optimisation. The deal aims to strengthen data governance across the investment ecosystem.

Under the MoU, Microsoft will support upskilling through its AI National Skilling Initiative, targeting 100,000 government employees. Training will focus on practical adoption, responsible use, and measurable outcomes, in line with the UAE’s National AI Strategy 2031.

Both parties will promote best practices in data management using Azure services such as Data Catalog and Purview. Workshops and knowledge-sharing sessions with local experts will standardise governance. Strong controls are positioned as the foundation for trustworthy AI at scale.

The agreement was signed by His Excellency Mohammad Alhawi and Amr Kamel. Officials say the collaboration will embed AI agents into workflows while maintaining compliance. Investment teams are expected to gain real-time insights and automation that shorten the time to action.

The partnership supports the ambition to make the UAE a leader in AI-enabled investment. It also signals deeper public–private collaboration on sovereign capabilities. With skills, standards, and use cases in place, the ministry aims to attract capital and accelerate diversification.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Scaling a cell ‘language’ model yields new immunotherapy leads

Yale University and Google unveiled Cell2Sentence-Scale 27B (C2S-Scale), a 27-billion-parameter model built on Gemma to decode the ‘language’ of cells. The system generated a novel hypothesis about cancer cell behaviour, and Google CEO Sundar Pichai called it ‘an exciting milestone’ for AI in science.

The work targets a core problem in immunotherapy: many tumours are ‘cold’ and evade immune detection. Making them visible requires boosting antigen presentation. C2S-Scale sought a ‘conditional amplifier’ drug that boosts signals only in immune-context-positive settings.

Smaller models lacked the reasoning to solve the problem, but scaling to 27B parameters unlocked the capability. The team then simulated 4,000 drugs across patient samples. The model flagged context-specific boosters of antigen presentation, with 10–30% already known and the rest entirely novel.
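The screen can be pictured as a differential in-silico test: score each candidate drug’s predicted effect on antigen presentation in immune-context-positive versus immune-context-neutral samples, and keep only the context-specific boosters. The sketch below is a schematic of that idea with invented numbers and thresholds, not the C2S-Scale pipeline itself.

```python
import random

random.seed(0)

# Hypothetical predicted change in antigen-presentation signal for each of
# 4,000 drugs, in two simulated contexts (all numbers are invented).
drugs = {
    f"drug_{i:04d}": {
        "immune_positive": random.gauss(0.0, 1.0),
        "immune_neutral": random.gauss(0.0, 1.0),
    }
    for i in range(4000)
}

def is_conditional_amplifier(effects, boost=1.5, ceiling=0.5):
    """Flag drugs predicted to raise the signal only in immune-positive
    contexts: the 'conditional amplifier' profile described above."""
    return (effects["immune_positive"] > boost
            and effects["immune_neutral"] < ceiling)

hits = [name for name, e in drugs.items() if is_conditional_amplifier(e)]
print(f"{len(hits)} candidate conditional amplifiers out of {len(drugs)}")
```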

Researchers emphasise that conditional amplification aims to raise immune signals only where key proteins are present. That could reduce off-target effects and make ‘cold’ tumours discoverable. The result hints at AI-guided routes to more precise cancer therapies.

Google has released C2S-Scale 27B on GitHub and Hugging Face for the community to explore. The approach blends large-scale language modelling with cell biology, signalling a new toolkit for hypothesis generation, drug prioritisation, and patient-relevant testing.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Vietnam unveils draft AI law inspired by EU model

Vietnam is preparing to become one of Asia’s first nations with a dedicated AI law, following the release of a draft bill that mirrors key elements of the EU’s AI Act. The proposal aims to consolidate rules for AI use, strengthen rights protections and promote innovation.

The law introduces a four-tier system for classifying risks, from banned applications such as manipulative facial recognition to low-risk uses subject to voluntary standards. High-risk systems, including those in healthcare or finance, would require registration, oversight and incident reporting to a national database.

Under the draft, companies deploying powerful general-purpose AI models must meet strict transparency, safety, and intellectual property standards. The bill would also create a National AI Commission and a National AI Development Fund to support local research, regulatory sandboxes, and tax incentives for emerging businesses.

Violations involving unsafe AI systems could lead to revenue-based fines and suspensions. The phased rollout begins in January 2026, with full compliance for high-risk systems expected by mid-2027. The government of Vietnam says the initiative reflects its ambition to build a trustworthy AI ecosystem.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Quebec man fined for using AI-generated evidence in court

A Quebec court has fined Jean Laprade C$5,000 (US$3,562) for submitting AI-generated content as part of his legal defence. Justice Luc Morin described the move as ‘highly reprehensible,’ warning that it could undermine the integrity of the judicial system.

The case concerned a dispute over a contract for three helicopters and an airplane in Guinea, where a clerical error awarded Laprade a more valuable aircraft than agreed. He resisted attempts by aviation companies to recover it, and a 2021 Paris arbitration ruling ordered him to pay C$2.7 million.

Laprade submitted fabricated AI-generated materials, including non-existent legal citations and inconsistent conclusions, in an attempt to strengthen his defence.

The judge emphasised that AI-generated information must be carefully controlled by humans, and the filing of legal documents remains a solemn responsibility. Morin acknowledged the growing influence of AI in courts but stressed the dangers of misuse.

While noting Laprade’s self-representation, the judge condemned his use of ‘hallucinated’ AI evidence and warned of future challenges from AI in courts.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Humanity AI launches $500M initiative to build a people-centred future

A coalition of ten leading philanthropic foundations has pledged $500 million over five years to ensure that AI evolves in ways that strengthen humanity rather than marginalise it.

The initiative, called Humanity AI, brings together organisations such as the Ford, MacArthur, Mellon, and Mozilla foundations to promote a people-driven vision for AI that enhances creativity, democracy, and security.

As AI increasingly shapes every aspect of daily life, the coalition seeks to place citizens at the centre of the conversation instead of leaving decisions to a few technology firms.

It plans to support new research, advocacy, and partnerships that safeguard democratic rights, protect creative ownership, and promote equitable access to education and employment.

The initiative also prioritises the ethical use of AI in safety and economic systems, ensuring innovation does not come at the expense of human welfare.

John Palfrey, president of the MacArthur Foundation, said Humanity AI aims to shift power back to the public by funding technologists and advocates committed to responsible innovation.

Michele Jawando of the Omidyar Network added that the future of AI should be designed by people collectively, not predetermined by algorithms or corporate agendas.

Rockefeller Philanthropy Advisors will oversee the fund, which begins issuing grants in 2026. Humanity AI invites additional partners to join in creating a future where people shape technology instead of being shaped by it.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!