Quantum-classical hybrid outperforms classical methods, HSBC and IBM study finds

HSBC and IBM have reported the first empirical evidence of the value of quantum computers in solving real-world problems in bond trading. Their joint trial showed a 34% improvement in predicting the likelihood of a trade being filled at a quoted price compared to classical-only techniques.

The trial used a hybrid approach that combined quantum and classical computing to optimise quote requests in over-the-counter bond markets. Production-scale trading data from the European corporate bond market was run on IBM quantum computers to predict winning probabilities.

The results demonstrate how quantum techniques can outperform standard methods in addressing the complex and dynamic factors in algorithmic bond trading. HSBC said the findings offer a competitive edge and could redefine how the financial industry prices customer inquiries.

Philip Intallura, HSBC Group Head of Quantum Technologies, called the trial ‘a ground-breaking world-first in bond trading’. He said the results show that quantum computing is on the cusp of delivering near-term value for financial services.

IBM’s latest Heron processor played a key role in the workflow, augmenting classical computation to uncover hidden pricing signals in noisy data. IBM said such work helps unlock new algorithms and applications that could transform industries as quantum systems scale.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

LinkedIn's default AI data sharing faces Dutch privacy watchdog scrutiny

The Dutch privacy watchdog, Autoriteit Persoonsgegevens (AP), is warning LinkedIn users in the Netherlands to review their settings to prevent their data from being used for AI training.

LinkedIn plans to use names, job titles, education history, locations, skills, photos, and public posts from European users to train its systems. Private messages will not be included; however, the sharing option is enabled by default.

AP Deputy Chair Monique Verdier said the move poses significant risks. She warned that once personal data is used to train a model, it cannot be removed, and its future uses are unpredictable.

LinkedIn, headquartered in Dublin, falls under the jurisdiction of the Data Protection Commission in Ireland, which will determine whether the plan can proceed. The AP said it is working with Irish and EU counterparts and has already received complaints.

Users must opt out by 3 November if they do not wish to have their data used. They can disable the setting via the AP’s link or manually in LinkedIn under ‘settings & privacy’ → ‘data privacy’ → ‘data for improving generative AI’.

Gatik and Loblaw to deploy 50 self-driving trucks in Canada

Autonomous logistics firm Gatik is set to expand its partnership with Loblaw, deploying 50 new self-driving trucks across North America over the next year. The move marks the largest autonomous truck deployment in the region to date.

The slow rollout of self-driving technology has frustrated supply chain watchers, with most firms still testing limited fleets. Gatik’s large-scale deployment signals a shift toward commercial adoption, with 20 trucks to be added by the end of 2025 and an additional 30 by 2026.

The partnership was enabled by Ontario’s Autonomous Commercial Motor Vehicle Pilot Program, a ten-year initiative allowing approved operators to test automated commercial trucks on public roads. Officials hope it will boost road safety and support the trucking sector.

Industry analysts note that North America's truck driver shortage is one of the most pressing logistics challenges facing the region. Nearly 70% of logistics firms report that driver shortages hinder their ability to meet freight demand, making automation an increasingly attractive option.

Gatik, operating in the US and Canada, says the deployment could ease labour pressure and improve efficiency, but safety remains a key concern. Experts caution that striking a balance between rapid rollout and robust oversight will be crucial for establishing trust in autonomous freight operations.

Lee Health launches AI-driven remote fetal monitoring

Lee Health has launched Florida’s first AI-powered birth care centre, introducing a remote fetal monitoring command hub to improve maternal and newborn outcomes across the Gulf Coast.

The system tracks temperature, heart rate, blood pressure, and pulse for mothers and babies, with AI alerting staff when vital signs deviate from normal ranges. Nurses remain in control but gain what Lee Health calls a ‘second set of eyes’.

‘Maybe mum’s blood pressure is high, maybe the baby’s heart rate is not looking great. We will be able to identify those things,’ said Jen Campbell, director of obstetrical services at Lee Health.

Once a mother checks in, the system immediately begins monitoring her across Lee Health's network and streams data to the AI hub. AI cues trigger early alerts under certified clinician oversight, in line with Lee Health's ethical AI policies, allowing staff to intervene before complications worsen.

Dr Cherrie Morris, vice president and chief physician executive for women’s services, said the hub strengthens patient safety by centralising monitoring and providing expert review from certified nurses across the network.

Secrets sprawl flagged as top software supply chain risk in Australia

Avocado Consulting urges Australian organisations to boost software supply chain security after a high-alert warning from the Australian Cyber Security Centre (ACSC). The alert flagged threats, including social engineering, stolen tokens, and manipulated software packages.

Dennis Baltazar of Avocado Consulting said attackers combine social engineering with living-off-the-land techniques, making attacks appear routine. He warned that secrets left across systems can turn small slips into major breaches.

Baltazar advised immediate audits to find unmanaged privileged accounts and non-human identities. He urged Australian organisations to embed security into their workflows through short-lived credentials, policy-as-code, and secret detection enabled by default, which he said reduces incidents while increasing development speed.
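
The 'default secret detection' recommended above can be sketched in a few lines: scan text for strings that look like credentials before code is committed. The rule set below is purely illustrative; production scanners ship hundreds of tuned patterns.

```python
import re

# Hypothetical rule set for illustration only.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "github_token": re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "private_key": re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
}

def scan_text(text: str) -> list[tuple[str, int]]:
    """Return (rule_name, line_number) pairs for every suspected secret."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((name, lineno))
    return findings

# The key is split across two literals so this snippet does not flag itself.
sample = "aws_key = 'AKIA" + "EXAMPLEKEY123456'"
print(scan_text(sample))  # → [('aws_access_key', 1)]
```

Wired into a pre-commit hook or CI step, a check like this blocks the 'small slips' Baltazar warns about before they reach a shared repository.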

Avocado Consulting advises organisations to eliminate secrets from code and pipelines, rotate tokens frequently, and validate every software dependency by default using version pinning, integrity checks, and provenance verification. Monitoring CI/CD activity for anomalies can also help detect attacks early.
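
The dependency-validation advice above rests on one idea: accept an artefact only if its digest matches a value pinned in advance, which is how pip's `--require-hashes` mode and npm lock files work. A minimal sketch, with an illustrative pinned digest:

```python
import hashlib
import hmac

# Hypothetical pinned digest, as it would appear in a lock file.
PINNED = {
    "example-lib-1.2.3.tar.gz": hashlib.sha256(b"trusted contents").hexdigest(),
}

def verify_artifact(name: str, data: bytes) -> bool:
    """Accept a dependency only if its sha256 matches the pinned digest."""
    expected = PINNED.get(name)
    if expected is None:
        return False  # unpinned dependencies are rejected by default
    actual = hashlib.sha256(data).hexdigest()
    # Constant-time comparison avoids leaking digest prefixes.
    return hmac.compare_digest(actual, expected)

print(verify_artifact("example-lib-1.2.3.tar.gz", b"trusted contents"))   # True
print(verify_artifact("example-lib-1.2.3.tar.gz", b"tampered contents"))  # False
```

Rejecting unknown names by default mirrors the 'validate every dependency' posture: anything not in the lock file fails closed rather than open.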

Failing to act could expose cryptographic keys, facilitate privilege escalation, and result in reputational and operational damage. Avocado Consulting states that secure development practices must become the default, with automated scanning and push protection integrated into the software development lifecycle.

UK government AI tool recovers £500m lost to fraud

A new AI system developed by the UK Cabinet Office has helped reclaim nearly £500m in fraudulent payments, marking the government’s most significant recovery of public funds in a single year.

The Fraud Risk Assessment Accelerator analyses data across government departments to identify weaknesses and prevent scams before they occur.

It uncovered unlawful council tax claims, social housing subletting, and pandemic-related fraud, including £186m linked to Covid support schemes. Ministers stated the savings would be redirected to fund nurses, teachers, and police officers.

Officials confirmed the tool will be licensed internationally, with the US, Canada, Australia, and New Zealand among the first partners expected to adopt it.

The UK announced the initiative at an anti-fraud summit with these countries, describing it as a step toward global cooperation in securing public finances through AI.

However, civil liberties groups have raised concerns about bias and oversight. Previous government AI systems used to detect welfare fraud were found to produce disparities based on age, disability, and nationality.

Campaigners warned that the expanded use of AI in fraud detection risks embedding unfair outcomes if left unchecked.

Spanish joins Google’s global AI Mode expansion

Google is rapidly expanding AI Mode, its generative AI-powered search assistant. The company has announced that the feature is now rolling out globally in Spanish. Spanish speakers can now interact with AI Mode to ask complex questions that traditional Search handles poorly.

AI Mode has seen swift adoption since its launch earlier this year. First introduced in March, the feature was rolled out to users across the US in May, followed by its first language expansion earlier this month.

Hindi, Indonesian, Japanese, Korean, and Brazilian Portuguese were the first languages added, and Spanish now joins the list. Google says more languages will follow soon as part of its global AI Mode rollout.

Google says the feature is designed to work alongside Search, not replace it, offering conversational answers with links to supporting sources. The company has stressed that responses are generated with safety filters and fact-checking layers.

The rollout reflects Google’s broader strategy to integrate generative AI into its ecosystem, spanning Search, Workspace, and Android. AI Mode will evolve with multimodal support and tighter integration with other Google services.

AI image war heats up as ByteDance unveils Seedream 4.0

ByteDance has unveiled Seedream 4.0, its latest AI-powered image generation model, which it claims outperforms Google DeepMind’s Gemini 2.5 Flash Image. The launch signals ByteDance’s bid to rival leading creative AI tools.

Developed by ByteDance’s Seed division, the model combines advanced text-to-image generation with fast, precise image editing. Internal testing reportedly showed superior prompt accuracy, image alignment, and visual quality compared with DeepMind’s system.

Artificial Analysis, an independent AI benchmarking firm, called Seedream 4.0 a significant step forward. The model integrates Seedream 3.0’s generation capability with SeedEdit 3.0’s editing tools while maintaining a price of US$30 per 1,000 generations.

ByteDance claims that Seedream 4.0 runs over 10 times faster than earlier versions, enhancing the user experience with near-instant image inference. Early users have praised its ability to make quick, text-prompted edits with high accuracy.

The tool is now available to users in China through Jimeng and Doubao AI apps and businesses via Volcano Engine, ByteDance’s cloud platform. A formal technical report supporting the company’s claims has not yet been released.

Yale students explore AI through clubs and fellowships

Across Yale, membership in AI-focused clubs such as the Yale Artificial Intelligence Association (AIA), Yale Artificial Intelligence Alignment (YAIA) and Yale Artificial Intelligence Policy Initiative (YAIPI) has grown rapidly.

The organisations offer weekly meetings, projects, and fellowships to deepen understanding of AI’s technical, ethical, and societal implications.

Each club has a distinct focus. YAIA addresses long-term risks and safety, while the AIA emphasises student-led technical projects and community-building. YAIPI explores ethics, governance and policy, particularly for students without technical backgrounds.

Fellowships, paper-reading groups and collaborative projects allow members to engage deeply with AI issues.

Membership numbers reflect this surge: AIA’s mailing list now includes around 400 students, YAIPI has over 200 subscribers, and YAIA admitted 25 students to its safety fellowship. The clubs are also beginning to collaborate, combining technical expertise with policy knowledge for joint projects.

Professional schools and faculty-led initiatives, including law and business-focused AI groups, further expand opportunities for student engagement.

AI’s role in classrooms remains varied. Some professors encourage experimentation with generative tools, while others enforce stricter rules, particularly in humanities courses. Yale’s Executive Committee warned first-year students against using AI platforms like ChatGPT without attribution.

Gemini brings conversational AI to Google TV

Google has launched Gemini for TV, bringing conversational AI to the living room. The update builds on Google TV and Google Assistant, letting viewers chat naturally with their screens to discover shows, plan trips, or even tackle homework questions.

Instead of scrolling endlessly, users can ask Gemini to find a film everyone will enjoy or recap last season’s drama. The AI can handle vague requests, like finding ‘that new hospital drama,’ and provide reviews before you press play.

Gemini also turns the TV into an interactive learning tool. From explaining why volcanoes erupt to guiding kids through projects, it offers helpful answers with supporting YouTube videos for hands-on exploration.

Beyond schoolwork, Gemini can help plan meals, teach new skills like guitar, or brainstorm family trips, all through conversational prompts. Such features make the TV a hub for entertainment, education, and inspiration.

Gemini is now available on the TCL QM9K series, with rollout to additional Google TV devices planned for later this year. Google says additional features are coming soon, making TVs more capable and personalised.
