Quantum and supercomputing converge in IBM-AMD initiative

IBM has announced plans to develop next-generation computing architectures by integrating quantum computers with high-performance computing, a concept it calls quantum-centric supercomputing.

The company is working with AMD to build scalable, open-source platforms that combine IBM’s quantum expertise with AMD’s strength in HPC and AI accelerators. The aim is to move beyond the limits of traditional computing and explore solutions to problems that classical systems cannot address alone.

Quantum computing uses qubits governed by quantum mechanics, offering a far richer computational space than binary bits. In a hybrid model, quantum machines could simulate atoms and molecules, while supercomputers powered by CPUs, GPUs, and AI manage large-scale data analysis.
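
To make the contrast with binary bits concrete, the sketch below uses the open-source Qiskit toolkit mentioned later in this article to put a few qubits into superposition and inspect the resulting state. It is a minimal illustration of the exponentially larger state space, not an example of IBM’s or AMD’s actual workloads; the circuit and qubit count are arbitrary.

```python
# Minimal sketch (assumes the open-source Qiskit package is installed):
# n qubits describe a state with 2**n complex amplitudes, unlike n classical
# bits, which hold exactly one of those 2**n values at a time.
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

n = 3
circuit = QuantumCircuit(n)
for qubit in range(n):
    circuit.h(qubit)  # place each qubit in an equal superposition

state = Statevector.from_instruction(circuit)
print(state.dim)              # 8 amplitudes for 3 qubits (2**3)
print(state.probabilities())  # uniform probabilities over all 8 basis states
```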

Arvind Krishna, IBM’s CEO, said the approach represents a new way of simulating the natural world. AMD’s Lisa Su described high-performance computing as foundational to tackling global challenges, noting the partnership could accelerate discovery and innovation.

An initial demonstration is planned for later this year, showing IBM quantum computers working with AMD technologies. Both companies say open-source ecosystems like Qiskit will be crucial to building new algorithms and advancing fault-tolerant quantum systems.

Musk’s influence puts Grok at the centre of AI bias debate

Elon Musk’s AI chatbot, Grok, has faced repeated changes to its political orientation, with updates shifting its answers towards more conservative views.

xAI, Musk’s company, initially promoted Grok as neutral and truth-seeking, but internal prompts have steered it on contentious topics. Adjustments included portraying declining fertility as the greatest threat to civilisation and downplaying right-wing violence.

Analyses of Grok’s responses by The New York Times showed that the July updates shifted answers to the right on government and economy, while some social responses remained left-leaning. Subsequent tweaks pulled it back closer to neutrality.

Critics say that system prompts, short instructions such as ‘be politically incorrect’, make it easy to adjust outputs but also leave the model prone to erratic or offensive responses. A July update saw Grok briefly endorse a controversial historical figure before xAI withdrew the change.
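
For readers unfamiliar with the mechanism, the sketch below shows in schematic form how a one-line system prompt sits ahead of a user’s question in a typical chat-style request. The message format is generic and the client call is a hypothetical placeholder; it is not xAI’s actual API or Grok’s real prompt.

```python
# Schematic only: a short "system prompt" prepended to the conversation can
# steer a chat model's tone and stance. The client and model names are
# hypothetical placeholders, not xAI's real interface.
messages = [
    {"role": "system", "content": "Be politically incorrect."},       # one-line steer
    {"role": "user", "content": "Summarise today's political news."},
]

# reply = hypothetical_client.chat(model="example-model", messages=messages)
# Editing only the system line can shift every subsequent answer, which is why
# such prompts are easy to adjust and equally easy to get wrong.
```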

The case highlights growing concerns about political bias in AI systems. Researchers argue that all chatbots reflect the worldviews of their training data, while companies increasingly face pressure to align them with user expectations or political demands.

Google dismisses false breach rumours as Gmail security concerns grow

Google has dismissed reports that Gmail suffered a massive breach, saying rumours that it had warned 2.5 billion users were false.

In a Monday blog post, Google rejected claims that it had issued global notifications about a serious Gmail security issue. It stressed that its protections remain effective against phishing and malware.

Confusion stems from a June incident involving a Salesforce server, during which attackers briefly accessed public business information, including names and contact details. Google said all affected parties were notified by early August.

The company acknowledged that phishing attempts are increasing, but clarified that Gmail’s defences block more than 99.9% of such attempts. A July blog post on phishing risks may have been misinterpreted as evidence of a breach.

Google urged users to remain vigilant, recommending passkeys as an alternative to passwords and regular reviews of account security. While the false alarm spurred unnecessary panic, security experts noted that updating credentials remains good practice.

UK institutions embrace enterprise AI through global tech alliance

Microsoft, Accenture, and Avanade are deepening their 25-year partnership to bring AI into some of the UK’s most vital sectors, including healthcare and finance. NHS England is piloting AI-powered tools to streamline patient services and cut down on time-consuming administrative tasks, while Nationwide Building Society is deploying machine learning to improve customer services, speed up mortgage approvals, and enhance fraud detection.

The three companies have different responsibilities in tackling the challenges of enterprise AI. Microsoft provides the Azure cloud platform and pre-built AI models, Accenture contributes sector-specific expertise and governance frameworks, and Avanade integrates the technology into existing systems and workflows. That structure helps organisations move beyond experimental AI pilots and scale solutions reliably in highly regulated industries.

Unlike consumer applications, enterprise AI must meet strict compliance requirements, especially concerning sensitive patient data or financial transactions. The partnership emphasises embedding AI directly into day-to-day operations rather than treating it as an add-on, reducing disruption for staff and ensuring systems work seamlessly once live.

With regulators tightening oversight, the alliance highlights responsible AI as a key focus. By prioritising transparency, security, and ethical use, Microsoft, Accenture, and Avanade are positioning their collaboration as a blueprint for how AI can be adopted across critical institutions without compromising trust or reliability.

AI oversight and audits at core of Pakistan’s security plan

Pakistan plans to roll out AI-driven cybersecurity systems to monitor and respond to attacks on critical infrastructure and sensitive data in real time. Documents from the Ministry for Information Technology outline a framework to integrate AI into every stage of security operations.

The initiative will enforce protocols like secure data storage, sandbox testing, and collaborative intelligence sharing. Human oversight will remain mandatory, with public sector AI deployments registered and subject to transparency requirements.

Audits and impact assessments will ensure compliance with evolving standards, backed by legal penalties for breaches. A national policy on data security will define authentication, auditing, and layered defence strategies across network, host, and application levels.

New governance measures include identity management policies with multi-factor authentication, role-based controls, and secure frameworks for open-source AI. AI-powered simulations will help anticipate threats, while regulatory guidelines address risks from disinformation and generative AI.
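
As a rough illustration of what ‘role-based controls’ mean in practice, the toy check below grants an action only when a role explicitly permits it. The roles, permissions, and code are invented for illustration and are not part of Pakistan’s actual framework.

```python
# Toy sketch of a role-based access check; roles and permissions are invented
# examples, not Pakistan's actual policy.
ROLE_PERMISSIONS = {
    "analyst": {"read_logs"},
    "administrator": {"read_logs", "update_models", "manage_users"},
}

def is_allowed(role: str, action: str) -> bool:
    """Permit an action only if the role explicitly grants it (least privilege)."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("analyst", "update_models"))        # False
print(is_allowed("administrator", "update_models"))  # True
```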

Regulatory sandboxes will allow enterprises in Pakistan to test systems under controlled conditions, with at least 20 firms expected to benefit by 2027. Officials say the measures will balance innovation with security, safeguarding infrastructure and citizens.

Beijing seeks to curb excess AI investment while sustaining growth

China has pledged to rein in excessive competition in AI, signalling Beijing’s desire to avoid wasteful investment while keeping the technology central to its economic strategy.

The National Development and Reform Commission stated that provinces should develop AI in a coordinated manner, leveraging local strengths to prevent duplication and overlap. Officials in China emphasised the importance of orderly flows of talent, capital, and resources.

The move follows President Xi Jinping’s warnings about unchecked local investment. Authorities aim to prevent overcapacity problems, such as those seen in electric vehicles, which have fuelled deflationary pressures in other industries.

While global investment in data centres has surged, Beijing is adopting a calibrated approach. The state also vowed stronger national planning and support for private firms, aiming to nurture new domestic leaders in AI.

At the same time, policymakers are pushing to attract private capital into traditional sectors, while considering more central spending on social projects to ease local government debt burdens and stimulate long-term consumption.

Salt Typhoon hack reveals fragility of global communications networks

The FBI has warned that Chinese hackers are exploiting structural weaknesses in global telecom infrastructure, following the Salt Typhoon incident that penetrated US networks on an unprecedented scale. Officials say the Beijing-linked group has compromised data from millions of Americans since 2019.

Unlike previous cyber campaigns focused narrowly on government targets, Salt Typhoon’s intrusions exposed how ordinary mobile users can be swept up in espionage. Call records, internet traffic, and even geolocation data were siphoned from carriers, with the operation spreading to more than 80 countries.

Investigators linked the campaign to three Chinese tech firms supplying products to intelligence agencies and China’s People’s Liberation Army. Experts warn that the attacks demonstrate the fragility of cross-border telecom systems, where a single compromised provider can expose entire networks.

US and allied agencies have urged providers to harden defences with encryption and stricter monitoring. Analysts caution that, without structural reforms, global telecoms will remain fertile ground for state-backed groups.

The revelations have intensified geopolitical tensions, with the FBI describing Salt Typhoon as one of the most reckless and far-reaching espionage operations ever detected.

Legal barriers and low interest delay Estonia’s AI rollout in schools

Estonia’s government-backed AI teaching tool, developed under the €1 million TI-Leap programme, faces hurdles before reaching schools. Legal restrictions and waning student interest have delayed its planned September rollout.

Officials in Estonia stress that regulations to protect minors’ data remain incomplete. To ensure compliance, the Ministry of Education is drafting changes to the Basic Schools and Upper Secondary Schools Act.

Yet, engagement may prove to be the bigger challenge. Developers note students already use mainstream AI for homework, while the state model is designed to guide reasoning rather than supply direct answers.

Educators say success will depend on usefulness. The AI will be piloted in 10th and 11th grades, alongside teacher training, as studies have shown that more than 60% of students already rely on AI tools.

Estonia’s Vocal Image uses AI to boost communication skills

Estonia-based startup Vocal Image is deploying AI to help people improve their vocal and communication skills. Its app features an interactive library of tongue twisters, breathing exercises and suggestions for gestures, all enhanced with automated feedback and personalised coaching tips.

Led by CEO Nick Lahoika, the company has scaled rapidly, achieving upwards of 4 million downloads and serving approximately 160,000 active users.

Vocal Image positions itself as an affordable, mobile-first alternative to traditional one-on-one voice training, rooted in Lahoika’s own journey overcoming speaking anxiety.

The app’s design enables users to practise at home with privacy and convenience, offering daily, bite-sized AI-informed lessons that assess strengths, suggest improvements, and nurture confidence without the need for human instructors.
