China pushes quantum computing towards industrial use

A Chinese startup has used quantum computing to improve breast cancer screening accuracy, highlighting how the technology could transform medical diagnostics. Based in Hefei, Origin Quantum applied its superconducting quantum processor to analyse medical images faster and more precisely.

China is accelerating efforts to turn quantum research into industrial applications, with companies focusing on areas such as drug discovery, smart cities and finance. Government backing and national policy have driven rapid growth in the sector, with over 150 firms now active in quantum computing.

In addition to medical uses, quantum algorithms are being tested in autonomous parking, where they have dramatically cut wait times. Banks and telecom firms have also begun adopting quantum solutions to improve operational efficiency in areas such as staff scheduling.

The merging of quantum computing with AI is seen as the next significant step, with Origin Quantum recently fine-tuning a billion-parameter AI model on its quantum system. Experts expect the integration of these technologies to shift from labs to practical use in the next five years.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Solana teams up with Kazakhstan to grow crypto startups

Solana has signed a Memorandum of Understanding with Kazakhstan’s digital ministry to support the country’s growing crypto sector. The partnership aims to advance startups and improve developer education using the Solana blockchain.

The collaboration aims to promote the tokenisation of capital markets, enhancing the appeal of Kazakhstan’s Astana International Exchange (AIX) to global investors.

Solana Foundation leaders highlighted how blockchain technology could help AIX compete with major exchanges such as the NYSE and Nasdaq by storing most trading volume on-chain.

The announcement comes shortly after Kazakhstan launched the Solana Economic Zone, the first in Central Asia. Digital minister Zhaslan Madiyev called the initiative a step towards fostering web3 talent and advancing Kazakhstan’s digital economy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Tether CEO unveils offline password manager

Paolo Ardoino, CEO of Tether, has introduced PearPass, an open-source, offline password manager. The launch comes in response to the most significant credential breach on record, which exposed 16 billion passwords.

Ardoino criticised cloud storage, stating the time has come to abandon reliance on it for security.

The leaked data reportedly covers login details from major platforms like Apple, Meta, and Google, leaving billions vulnerable to identity theft and fraud. Experts have not yet identified the perpetrators but point to systemic flaws in cloud-based data protection.

PearPass is designed to operate entirely offline, storing credentials only on users’ devices without syncing to the internet or central servers. It aims to reduce the risks of mass hacking attempts targeting large cloud vaults.

The tool’s open-source nature allows transparency and encourages the adoption of safer, decentralised security methods.

Cybersecurity authorities urge users to change passwords immediately, enable multi-factor authentication, and monitor accounts closely.

As investigations proceed, PearPass’s launch renews the debate on personal data ownership and may set a new standard for password security.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Bitcoin holds firm as tensions rise in the Gulf

Oil markets are on edge after US airstrikes hit three of Iran’s nuclear sites, raising fears of disruption to the Strait of Hormuz. The narrow passage is vital for about 20% of the world’s oil supply.

Any obstruction could drive crude prices up to $130 per barrel and intensify global inflation pressures.

Despite the joint strikes by the US and Israel, Brent crude remains stable for now, hovering near $72 per barrel. Traders are closely watching Iran’s next move and whether shipping through the Strait will be affected.

Bitcoin, in contrast, has shown remarkable resilience. Trading above $102,600, the leading cryptocurrency has not reacted to the military escalation, reinforcing its role as a safe-haven asset during geopolitical uncertainty.

With its fixed supply and decentralised structure, Bitcoin is increasingly being seen as a hedge against inflation and instability. Its steady price amid market anxiety highlights the growing confidence in crypto during global crises.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Apple considers buying Perplexity AI

Apple is reportedly considering the acquisition of Perplexity AI as it attempts to catch up in the fast-moving race for dominance in generative technology.

According to Bloomberg, the discussions, which remain at an early stage, involve senior executives including services chief Eddy Cue and head of mergers and acquisitions Adrian Perica.

Such a move would mark a significant shift for Apple, which typically avoids large-scale takeovers. However, with investor pressure mounting after an underwhelming developer conference, the tech giant may rethink its traditionally cautious acquisition strategy.

Perplexity has gained prominence for its fast, clear AI chatbot and recently secured funding at a $14 billion valuation.

Should Apple proceed, the acquisition would be the company’s largest ever, both financially and strategically, potentially transforming its position in AI and reducing its long-standing dependence on Google’s search infrastructure.

Apple’s slow development of Siri and reliance on a $20 billion revenue-sharing deal with Google have left it trailing rivals. With that partnership now under regulatory scrutiny in the US, Apple may view Perplexity as a vital step towards building a more autonomous search and AI ecosystem.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Parliamentarians at IGF 2025 call for action on information integrity

At the Internet Governance Forum 2025 in Lillestrøm, Norway, global lawmakers and experts gathered to confront one of the most pressing challenges of our digital era: the societal impact of misinformation and disinformation, especially amid the rapid advance of AI. Framed by the UN Global Principles for Information Integrity, the session spotlighted the urgent need for resilient, democratic responses to online erosion of public trust.

AI’s disruptive power took centre stage, with speakers citing alarming trends: deepfakes manipulated global election narratives in over a third of national polls in 2024 alone. Experts such as Lindsay Gorman from the German Marshall Fund warned of a polluted digital ecosystem where fabricated video and audio now threaten core democratic processes.

UNESCO’s Marjorie Buchser expanded on the concern, noting that generative AI not only enables manipulation but also redefines how people access information, often diverting users from traditional journalism towards context-stripped AI outputs. Panellists cautioned, however, that regulation alone is no panacea.

Instead, panellists promoted ‘democracy-affirming technologies’ that embed transparency, accountability, and human rights at their foundation. The conversation urged greater investment in open, diverse digital ecosystems, particularly those supporting low-resource languages and underrepresented cultures. At the same time, multiple voices called for more equitable research, warning that Western-centric data and governance models skew current efforts.

In the end, a recurring theme echoed across the room: tackling information manipulation is a collective endeavour that demands multistakeholder cooperation. From enforcing technical standards to amplifying independent journalism and bolstering AI literacy, participants called for governments, civil society, and the tech industry to build unified, future-proof solutions that protect democratic integrity while preserving the fundamental right to free expression.

Track all key moments from the Internet Governance Forum 2025 on our dedicated IGF page.

Tailored AI agents improve work output—at a social cost

AI agents can significantly improve workplace productivity when tailored to individual personality types, according to new research from the Massachusetts Institute of Technology (MIT). However, the study also found that increased efficiency may come at the expense of human social interaction.

Led by Professor Sinan Aral and postdoctoral associate Harang Ju from MIT Sloan School of Management, the research revealed that human workers collaborating with AI agents completed tasks 60% more efficiently. This gain was partly attributed to a 23% reduction in social messages between team members.

The findings come amid a surge in the adoption of AI agents. A recent PwC survey found that 79% of senior executives had implemented AI agents in their organisations, with 66% reporting productivity gains. Agents are used in roles ranging from customer support to executive assistance and data analysis.

Aral and Ju developed a platform called Pairit (formerly MindMeld) to examine how AI affects team dynamics. In one of their experiments, over 2,000 participants were randomly assigned to human-only teams or teams mixed with AI agents. The groups were tasked with creating advertisements for a think tank.

Teams that included AI agents produced more content and higher-quality ad copy, but their human members communicated less, especially regarding emotional and rapport-building messages.

The study also highlighted the importance of matching AI traits to human personalities. For example, conscientious humans worked more effectively with AI agents displaying high openness, whereas extroverted humans underperformed when paired with highly conscientious AI counterparts.

‘AI traits can complement human personalities to enhance collaboration,’ the researchers noted. However, they stressed that the same AI assistant may not suit everyone.

The insight underpins the launch of their new venture, Pairium AI, which aims to develop agentic AI that adapts to individual work styles. The company promotes its mission as ‘personalising the Agentic Age.’

Ju emphasised the importance of compatibility: ‘You don’t work the same way with all colleagues—AI should adapt in the same way.’

Devanshu Mehrotra, an analyst at Gartner, described the research as groundbreaking. ‘This opens the door to a much deeper conversation about the hyper-customisation of AI in the workplace.’

Looking ahead, Aral and Ju plan to explore how personalised AI can assist in negotiations, customer support, creative writing and coding tasks. Their findings suggest fitting AI to the user may become as critical as managing human team dynamics.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Perplexity AI bot now makes videos on X

Perplexity’s AI chatbot, now integrated with X (formerly Twitter), has introduced a feature that allows users to generate short AI-created videos with sound.

By tagging @AskPerplexity with a brief prompt, users receive eight-second clips featuring computer-generated visuals and audio, including dialogue. The move is seen as a potential driver of engagement on the Elon Musk-owned platform.

However, concerns have emerged over the possibility of misinformation spreading more easily. Perplexity claims to have installed strong filters to limit abuse, but X’s poor content moderation continues to fuel scepticism.

The feature has already been used to create imaginative videos involving public figures, sparking debates around ethical use.

The competition between Perplexity’s ‘Ask’ bot and Musk’s Grok AI is intensifying, with the former taking the lead in multimedia capabilities. Despite its popularity on X, Grok does not currently support video generation.

Meanwhile, Perplexity is expanding to other platforms, including WhatsApp, offering AI services directly without requiring a separate app or registration.

Legal troubles have also surfaced. The BBC is threatening legal action against Perplexity over alleged unauthorised use of its content for AI training. In a strongly worded letter, the broadcaster has demanded content deletion, compensation, and a halt to further scraping.

Perplexity dismissed the claims as manipulative, accusing the BBC of misunderstanding technology and copyright law.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Elon Musk wants Grok AI to replace historical facts

Elon Musk has revealed plans to retrain his Grok AI model by rewriting human knowledge, claiming current training datasets contain too much ‘garbage’ and unchecked errors.

He stated that Grok 3.5 would be designed for ‘advanced reasoning’ and tasked with correcting historical inaccuracies before using the revised corpus to retrain itself.

Musk, who has criticised other AI systems like ChatGPT for being ‘politically correct’ and biased, wants Grok to be ‘anti-woke’ instead.

His stance echoes his earlier approach to X, where he relaxed content moderation and introduced a Community Notes feature in response to the platform being flooded with misinformation and conspiracy theories after his takeover.

The proposal has drawn fierce criticism from academics and AI experts. Gary Marcus called the plan ‘straight out of 1984’, accusing Musk of rewriting history to suit personal beliefs.

Logic professor Bernardino Sassoli de’ Bianchi warned the idea posed a dangerous precedent where ideology overrides truth, calling it ‘narrative control, not innovation’.

Musk also urged users on X to submit ‘politically incorrect but factually true’ content to help train Grok.

The move quickly attracted falsehoods and debunked conspiracies, including Holocaust distortion, anti-vaccine claims and pseudoscientific racism, raising alarms about the real risks of curating AI data based on subjective ideas of truth.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

LinkedIn users still hesitate to use AI writing tools

LinkedIn users have readily embraced AI in many areas, but one feature has not taken off as expected — AI-generated writing suggestions for posts.

CEO Ryan Roslansky admitted to Bloomberg that the tool’s popularity has fallen short, likely due to the platform’s professional nature and the risk of reputational damage.

Unlike casual platforms such as X or TikTok, LinkedIn posts often serve as an extension of users’ résumés. Roslansky explained that being called out for using AI-generated content on LinkedIn could damage someone’s career prospects, making users more cautious about automation.

Despite the hesitation around AI-assisted writing, LinkedIn has seen explosive growth in demand for AI-related jobs and skills. The number of roles requiring AI knowledge has increased sixfold in the past year, while user profiles listing such skills have jumped twentyfold.

Roslansky also shared that he relies on AI when communicating with his boss, Microsoft CEO Satya Nadella. Before sending an email, he uses Copilot to ensure it reflects the polished, insightful tone he calls ‘Satya-smart.’

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!