Microsoft Exchange Online outage affects users globally

A service disruption has hit Microsoft Exchange Online, with the company confirming an ongoing investigation into mailbox access issues affecting enterprise customers worldwide.

Reports indicate that users encountered connection difficulties across multiple access points, including the Outlook desktop and mobile applications as well as browser-based email services. The issue affects specific connection methods rather than the entire platform.

Organisations relying on cloud-based communication tools experienced interruptions to email workflows, calendar scheduling, and shared mailbox functionality. Such outages can significantly disrupt operational continuity, particularly for businesses that depend on real-time communication systems.

Updates through Microsoft’s service health channels suggest that engineering teams are working to identify the root cause, though no definitive explanation has yet been provided.

Such incidents highlight broader concerns around resilience in cloud infrastructure, as enterprises increasingly depend on centralised platforms for critical communication services.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

NVIDIA expands physical AI ecosystem to accelerate real-world robotics

Partnerships across the robotics sector are positioning NVIDIA at the centre of what is increasingly described as ‘physical AI’, a shift towards intelligent machines capable of perceiving, reasoning and acting in real environments.

A new generation of tools, including NVIDIA Cosmos world models and updated NVIDIA Isaac simulation frameworks, aims to support developers in training and validating robots before deployment.

These systems enable companies to simulate complex environments, reducing the risks and costs of real-world testing.

Industrial robotics leaders such as ABB Robotics, KUKA, and FANUC are integrating NVIDIA technologies into digital twin environments, enabling more accurate modelling of production lines and automation systems.

Advances are also extending into humanoid robotics, where companies are using AI models to develop machines capable of more flexible and adaptive behaviour.

New foundation models, including GR00T systems, are designed to give robots general-purpose capabilities instead of limiting them to specific tasks.

Healthcare and logistics represent additional areas of deployment, with robotics platforms being tested in surgical systems, warehouse automation and manufacturing environments. These applications highlight how physical AI could reshape industries requiring precision, safety and scalability.

Growing collaboration across cloud providers, manufacturers and AI developers suggests that robotics is moving toward a more integrated ecosystem, where simulation, data generation and deployment are increasingly interconnected.

6G will make wireless networks capable of thinking for themselves

Unlike its predecessors, 6G is being designed from the ground up with AI as a core feature rather than a performance add-on.

From user devices and base stations through to the network core, AI and machine learning will enable 6G networks to self-optimise, manage interference, predict user mobility, and make real-time decisions with minimal human intervention.

One of 6G’s most distinctive capabilities will be Integrated Sensing and Communication (ISAC), which allows radio signals to simultaneously carry data and sense the surrounding environment. In effect, the network becomes a vast, distributed sensor capable of detecting motion, tracking objects, and supporting applications such as predictive maintenance and autonomous vehicles.

AI plays a central role in interpreting this sensing data in real time, enabling split-second responses to real-world conditions.
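The ranging side of ISAC reduces to a textbook radar relation: a reflected copy of the transmitted signal returns after a round-trip delay τ, and the target distance follows as d = cτ/2. A minimal illustrative sketch, not tied to any real 6G stack:

```python
C = 3.0e8  # speed of light in m/s

def range_from_echo(delay_s):
    """Estimate target distance from a round-trip echo delay.

    In an ISAC-style system the same radio waveform that carries data
    can be correlated against its own reflection; the round-trip delay
    tau gives the range d = c * tau / 2.
    """
    return C * delay_s / 2

# A 1-microsecond round trip corresponds to a target about 150 m away.
distance_m = range_from_echo(1e-6)
```

Interpreting many such measurements across moving targets in real time is where the AI layer comes in.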

Standardisation efforts are already underway, with 3GPP’s Release 20 exploring how AI and machine learning can optimise the air interface and improve tasks such as channel state information compression.

Commercial 6G deployment is expected in the early 2030s, by which point AI is projected to act as the brain and nervous system of key parts of the network, constantly learning, adapting, and optimising with little human oversight.

How IBM is making quantum-centric supercomputing accessible to scientists

IBM has published a detailed reference architecture for quantum-centric supercomputing, providing a blueprint for integrating quantum processing units (QPUs) into existing high-performance computing (HPC) infrastructure without disruptive changes to current systems.

The release marks a significant step toward realising the vision articulated by physicist Richard Feynman, who argued decades ago that accurately simulating nature would require quantum-mechanical computation.

The architecture describes how quantum and classical systems, including CPUs, GPUs, and QPUs, can work together across multiple layers, from application and middleware tools such as Qiskit and CUDA through to resource management systems that orchestrate workloads in real time.

New algorithms such as Sample-based Krylov Quantum Diagonalisation (SKQD) are already demonstrating cases where quantum-centric workflows outperform leading classical-only methods, including in molecular ground-state energy calculations, where classical techniques failed to converge.

Real-world research applications are already emerging.

Scientists at the Cleveland Clinic Foundation used quantum-centric methods to simulate a 300-atom protein, the largest molecular simulation to date. Meanwhile, a team spanning IBM, Oxford, ETH Zurich, and other institutions used quantum algorithms to study a newly engineered ‘half-Möbius’ molecule whose electronic structure cannot easily be modelled classically.

IBM describes the trajectory as pointing toward a near future in which quantum computing can predict molecular properties that scientists can then bring to life in the laboratory.

Britain targets quantum leadership with £1bn investment

UK Secretary of State for Science, Innovation and Technology Liz Kendall has announced a £1bn funding package to boost UK quantum computing and retain domestic talent.

The initiative reflects growing concern over the country’s ability to compete globally, particularly after the US established dominance in AI.

Officials emphasised the need to retain British startups, engineers, and researchers, who often relocate abroad in search of better funding and scaling opportunities. The UK produces top talent, but many of its leading firms are now owned by US companies such as Google and OpenAI.

The investment will support the development of large-scale quantum computers for use across science, industry, and the public sector. Another £1bn will fund real-world use in finance, pharmaceuticals, and energy.

The government aims to build a fully operational domestic quantum system by the early 2030s.

Quantum computing uses qubits that can exist in multiple states simultaneously, enabling far greater computational power than classical systems. Fully fault-tolerant machines are still in development, but the technology could drive advances in drug discovery, materials science, and complex modelling.
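The superposition described above can be made concrete with a toy classical simulation: a qubit α|0⟩ + β|1⟩ collapses on measurement to 0 with probability |α|² and to 1 with probability |β|² = 1 − |α|². The sketch below (illustrative only, no real quantum runtime) estimates those frequencies over repeated shots:

```python
import random

def measure_frequency(alpha_sq, shots=10_000, seed=0):
    """Fraction of measurement shots that collapse to |1>.

    A qubit a|0> + b|1> yields outcome 1 with probability
    |b|^2 = 1 - |a|^2; repeating the measurement many times
    approximates that probability empirically.
    """
    rng = random.Random(seed)
    ones = sum(rng.random() >= alpha_sq for _ in range(shots))
    return ones / shots

# An equal superposition (|a|^2 = 0.5) collapses to 1 about half the time.
freq = measure_frequency(0.5)
```

Real quantum advantage comes not from this probabilistic collapse alone but from interference between amplitudes before measurement, which no classical toy captures.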

AI tool could help detect domestic violence risk years earlier

Researchers in the United States have developed an AI system designed to help doctors identify patients who may be at risk of intimate partner violence. The tool analyses hospital data to detect patterns associated with abuse, potentially enabling healthcare professionals to intervene earlier.

Intimate partner violence refers to abuse from current or former partners and can lead to serious injuries, chronic pain, and long-term mental health problems. According to the European Commission, 18 percent of women who have had a partner reported experiencing physical or sexual violence from that partner in 2021.

The study, published in the journal Nature, examined hospital records from nearly 850 women who had experienced intimate partner violence and more than 5,200 similar patients in a control group. Researchers used the data to train three different machine learning systems to detect patterns associated with abuse.

One model analysed structured hospital data, such as age and medical history. A second model examined written clinical notes, including doctors’ observations and radiology reports. A third system combined both data types and achieved the strongest results, correctly identifying risk in 88 percent of cases.
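The third, combined model can be pictured as fusing both feature types into a single input vector. A miniature sketch of that fusion step; the vocabulary and fields here are hypothetical, not taken from the study:

```python
from collections import Counter

# Hypothetical vocabulary of clinical-note terms; in the study the
# text model would derive its own features from doctors' notes.
VOCAB = ["fracture", "bruising", "anxiety", "fall"]

def combined_features(age, prior_visits, note):
    """Concatenate structured fields with bag-of-words counts from a note.

    This mirrors, in miniature, how a combined model can fuse
    structured hospital data with text-derived features into one
    input vector for a downstream classifier.
    """
    counts = Counter(note.lower().split())
    text_part = [counts[word] for word in VOCAB]
    return [age, prior_visits] + text_part

vec = combined_features(34, 5, "Bruising noted after reported fall")
# vec -> [34, 5, 0, 1, 0, 1]
```

A real system would feed such vectors into a trained classifier; the point here is only that the two data types end up in one representation.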

Researchers found that the system could flag potential abuse more than three years before some patients later entered hospital-based intervention programmes. By analysing large datasets, the tool can detect patterns of physical trauma linked to abuse and alert clinicians so they can approach the issue carefully and offer support.

‘Human made’ labels emerge as industries react to AI expansion

Organisations around the world are developing certification labels designed to show that products or creative work were made by humans rather than AI. New badges such as ‘Human made’, ‘AI free’ and ‘Proudly Human’ are appearing across books, films, marketing and websites as industries respond to the rapid spread of AI tools.

At least eight initiatives are now attempting to create a label that could achieve global recognition similar to the Fair Trade mark. Experts warn that competing definitions and inconsistent certification systems could confuse consumers unless a universal standard is agreed upon.

Some schemes allow creators to download AI-free badges with little or no verification, while others use paid auditing processes that rely on analysts and AI detection tools. Researchers note that defining ‘human-made’ is increasingly difficult because AI technologies are embedded in many everyday software tools.

Creative industries are at the centre of the debate as generative AI rapidly produces books, films and music at lower cost and higher speed. Advocates of certification argue that verified human-created content may gain greater value if consumers can clearly distinguish it from AI-generated work.

Tinder tests AI matchmaking features for modern dating

Popular dating platform Tinder is testing a new AI-powered feature called ‘Chemistry’ designed to improve matchmaking. The tool analyses user profiles to identify more relevant connections while the app’s familiar swipe system remains central to the experience.

Developed by parent company Match Group, the feature uses AI to understand personality traits, interests and preferences through profile data. Future updates may allow users to answer questionnaires or share photo archives to refine recommendations.

Additional modes are also being introduced to further personalise matches. Music preferences and zodiac signs can now influence suggested profiles, reflecting evolving trends among younger online daters.

The platform is also testing in-person events and virtual video speed dating to encourage real-world interaction. AI moderation tools are also being deployed, helping detect inappropriate messages and verify that profiles belong to real people.

Security warning issued over OpenClaw AI agent

Cybersecurity authorities have warned that vulnerabilities in the OpenClaw AI agent could expose sensitive data. Officials in China say weak default security settings may allow attackers to exploit the system.

Experts warned that prompt injection attacks could manipulate OpenClaw when it accesses online content: malicious instructions hidden in websites may cause the AI agent to reveal confidential information.

Researchers have also identified risks involving link previews in messaging apps such as Telegram and Discord, where attackers could trick the system into sending sensitive data to malicious websites.

Security specialists advise organisations to strengthen protections around AI agents. Recommendations include isolating systems, limiting network access, and installing only trusted software components.

Seoul deepens ties with global AI developers

South Korea is pursuing a partnership with AI company Anthropic as part of a national strategy to strengthen technological capabilities. Officials are working toward a memorandum of understanding with the developer of the Claude AI system.

The initiative follows discussions between South Korea’s science minister and Anthropic’s chief executive, Dario Amodei, during an AI summit in New Delhi. Authorities are also preparing for the company’s planned office opening in Seoul in 2026.

Government leaders in South Korea have already expanded cooperation with OpenAI. Policymakers say the strategy aims to build ties with leading global AI developers while supporting domestic innovation.

Officials are also developing a homegrown AI foundation model with local companies. The programme forms part of a national plan to position the country among the world’s leading AI powers.
