Microsoft to supply AI tools to federal agencies in a cost-saving pact

The US General Services Administration (GSA) has reached a significant agreement with Microsoft to provide federal agencies with discounted access to its suite of AI and cloud tools.

Rather than requiring each agency to negotiate separate contracts, the government-wide pact offers unified pricing on products including Microsoft 365, the Copilot AI assistant, and Azure cloud services, potentially saving agencies up to $3.1 billion in its first year.

The arrangement is designed to accelerate AI adoption and digital transformation across the federal government. It includes free access to the generative AI chatbot Microsoft 365 Copilot for up to 12 months, alongside discounts on cybersecurity tools and Dynamics 365.

Agencies can opt into any of the offers through September next year.

The deal leverages the federal government’s collective purchasing power to reduce costs and foster innovation.

It delivers on a White House AI action plan and follows similar arrangements the GSA announced last month with other tech giants, including Google, Amazon Web Services, and OpenAI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Gemini upgrade for Google Home coming soon

An upcoming upgrade for Google Home devices is set to bring a new AI assistant, Gemini, to the smart home ecosystem. A recent post by the Made by Google account on X revealed that more details will be announced on 1 October.

The move follows months of user complaints about Google Home’s performance, including issues with connectivity and the assistant’s failure to recognise basic commands.

With Gemini’s superior ability to understand natural language, the upgrade is expected to significantly improve how users interact with their smart devices. Home devices should be better at executing complex commands with multiple actions, such as dimming some lights while leaving others on.

The update will also introduce ‘Gemini Live’ to compatible devices, a feature that allows natural, back-and-forth conversations with the AI chatbot.

The Gemini for Google Home upgrade will initially be rolled out on an early access basis. It will be available in free and paid tiers, suggesting that some more advanced features may be locked behind a subscription.

The update is anticipated to make Google Home and Nest devices more reliable and better able to handle complex requests.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Quantum and supercomputing converge in IBM-AMD initiative

IBM has announced plans to develop next-generation computing architectures by integrating quantum computers with high-performance computing, a concept it calls quantum-centric supercomputing.

The company is working with AMD to build scalable, open-source platforms that combine IBM’s quantum expertise with AMD’s strength in HPC and AI accelerators. The aim is to move beyond the limits of traditional computing and explore solutions to problems that classical systems cannot address alone.

Quantum computing uses qubits governed by quantum mechanics, offering a far richer computational space than binary bits. In a hybrid model, quantum machines could simulate atoms and molecules, while supercomputers powered by CPUs, GPUs, and AI manage large-scale data analysis.

Arvind Krishna, IBM’s CEO, said the approach represents a new way of simulating the natural world. AMD’s Lisa Su described high-performance computing as foundational to tackling global challenges, noting the partnership could accelerate discovery and innovation.

An initial demonstration is planned for later this year, showing IBM quantum computers working with AMD technologies. Both companies say open-source ecosystems like Qiskit will be crucial to building new algorithms and advancing fault-tolerant quantum systems.
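For readers curious what working with the Qiskit ecosystem mentioned above looks like, the toy sketch below builds a two-qubit entangled state and inspects it with a classical simulator. It assumes only that the open-source qiskit package is installed, and it illustrates basic quantum circuits rather than IBM's or AMD's actual quantum-centric supercomputing stack.

```python
# Toy illustration of the quantum side of a hybrid workflow using Qiskit.
# Not IBM's or AMD's production stack; assumes the qiskit package is installed.
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

# Prepare a two-qubit entangled (Bell) state, something two independent
# classical bits cannot represent.
qc = QuantumCircuit(2)
qc.h(0)        # put qubit 0 into superposition
qc.cx(0, 1)    # entangle qubit 1 with qubit 0

# Simulate the circuit classically (feasible only for small qubit counts)
# and hand the probabilities to ordinary Python code for post-processing.
state = Statevector.from_instruction(qc)
print(state.probabilities_dict())  # expected: roughly {'00': 0.5, '11': 0.5}
```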

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Azure Active Directory flaw exposes sensitive credentials

A critical security flaw in Azure Active Directory has exposed application credentials stored in appsettings.json files, allowing attackers unprecedented access to Microsoft 365 tenants.

By exploiting these credentials, threat actors can masquerade as trusted applications and gain unauthorised entry to sensitive organisational data.

The vulnerability leverages the OAuth 2.0 Client Credentials Flow, enabling attackers to generate valid access tokens.

Once authenticated, they can access Microsoft Graph APIs to enumerate users, groups, and directory roles, especially when applications have been granted excessive permissions such as Directory.Read.All or Mail.Read. Such access permits data harvesting across SharePoint, OneDrive, and Exchange Online.
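To make the mechanism concrete, the sketch below shows the standard OAuth 2.0 client credentials flow using Microsoft's MSAL library. The tenant ID, application ID, and secret are placeholders; the point is that whoever holds the real values, whether legitimately or via a leaked appsettings.json, receives app-only Graph tokens carrying every permission granted to that application.

```python
# Sketch of the OAuth 2.0 client credentials flow with MSAL (placeholder values).
# A valid client ID and secret yield app-only tokens with all granted permissions.
import msal
import requests

TENANT_ID = "<tenant-id>"          # placeholder
CLIENT_ID = "<application-id>"     # placeholder
CLIENT_SECRET = "<client-secret>"  # the value exposed in appsettings.json

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    client_credential=CLIENT_SECRET,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
)

# App-only token for Microsoft Graph; effective scopes come from the app registration.
result = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])

if "access_token" in result:
    # With Directory.Read.All, this call enumerates users in the tenant.
    resp = requests.get(
        "https://graph.microsoft.com/v1.0/users",
        headers={"Authorization": f"Bearer {result['access_token']}"},
    )
    print(resp.status_code, len(resp.json().get("value", [])))
```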

Attackers can also deploy malicious applications under compromised tenants, escalating privileges from limited read access to complete administrative control.

Additional exposed secrets like storage account keys or database connection strings enable lateral movement, modification of critical data, and the creation of persistent backdoors within cloud infrastructure.

Organisations face serious compliance implications under GDPR, HIPAA, or SOX. The vulnerability underscores the importance of auditing configuration files, storing credentials securely in solutions like Azure Key Vault, and monitoring authentication patterns to detect long-running, sophisticated attacks.
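As one concrete form of that mitigation, the sketch below retrieves an application secret from Azure Key Vault at runtime using the Azure SDK, so nothing sensitive needs to live in appsettings.json. The vault URL and secret name are placeholders for illustration.

```python
# Sketch of the mitigation: fetch secrets from Azure Key Vault at runtime
# instead of committing them to appsettings.json. Vault URL and secret name
# are placeholders.
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

VAULT_URL = "https://<your-key-vault>.vault.azure.net"  # placeholder

# In production this resolves to a managed identity, so no secret sits on disk.
credential = DefaultAzureCredential()
client = SecretClient(vault_url=VAULT_URL, credential=credential)

# Retrieve the application secret only when needed; never write it to config files.
secret = client.get_secret("graph-client-secret")  # hypothetical secret name
print("Loaded secret version:", secret.properties.version)
```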

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Alibaba shares soar on AI and cloud growth

Alibaba’s Hong Kong shares rose more than 15%, their biggest single-day gain since early 2023, following strong AI revenue growth. AI-related sales surged by triple digits, and the cloud division grew 26% to 33.4 billion yuan ($4.7 billion), exceeding expectations.

The results underline Alibaba’s transformation from a retail-heavy company into a diversified technology player. Analysts say AI is now a central growth driver, with cloud and AI offerings boosting investor confidence despite price war pressures from JD.com and Meituan.

Alibaba is investing in AI hardware and developing proprietary chips to reduce reliance on foreign semiconductors. The strategy aims to build faster, cheaper, and more secure AI systems for domestic and international markets, including Lazada and AliExpress.

Experts view this calculated self-reliance and strong cloud and AI services as a long-term growth driver.

While retail rivals continue to struggle with profit pressure, Alibaba’s leadership has emphasised AI as a core strategic focus.

CEO Eddie Wu highlighted ambitions in artificial general intelligence, with analysts noting that AI could shield Alibaba from price wars and support growth across multiple business areas.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Apple creates Asa chatbot for staff training

Apple is moving forward with its integrated approach to AI by testing an internal chatbot designed for retail training. The company focuses on embedding AI into existing services rather than launching a consumer-facing chatbot like Google’s Gemini or ChatGPT.

The new tool, Asa, is being tested within Apple’s SEED app, which offers training resources for store employees and authorised resellers. Asa is expected to improve learning by allowing staff to ask open-ended questions and receive tailored responses.

Screenshots shared by analyst Aaron Perris show Asa handling queries about device features, comparisons, and use cases. Although still in testing, the chatbot is expected to expand across Apple’s retail network in the coming weeks.

The development occurs amid broader AI tensions, as Elon Musk’s xAI sued Apple and OpenAI for allegedly colluding to limit competition. Apple’s focus on internal AI tools like Asa contrasts with Musk’s legal action, highlighting disputes over AI market dominance and platform integration.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Walmart rolls out AI agents to transform shopping and operations

Walmart has unveiled four AI agents to ease the workloads of shoppers, employees, and suppliers. The tools, revealed at the company’s Retail Rewired event, include Marty for suppliers, Sparky for customers, an Associate Agent for staff, and a Developer Agent.

The retailer is leaning on AI as inflation, tariffs, and policy pressures weigh on consumer spending. Its agents cover payroll, time-off requests, merchandising, and personalised shopping recommendations.

Sparky is set to eventually handle automatic reordering of staples, aiming to simplify everyday restocking for households.

Walmart is also investing in ‘digital twins,’ virtual replicas of stores that allow early detection of operational issues. The company says this technology cut emergency alerts by 30% last year and reduced refrigeration maintenance costs by nearly a fifth.

Machine learning is further being applied to improve delivery-time predictions, helping to boost efficiency and customer satisfaction.

Rival retailers are making similar moves. Amazon reported a surge in generative AI use during its Prime Day sales, while Google Cloud AI has partnered with Lush to cut training costs.

Analysts suggest such tools could reshape the retail experience as companies search for ways to hold margins in a tighter economy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Stethoscope with AI identifies heart issues in seconds

A new stethoscope powered by AI could enable doctors to identify three serious heart conditions in just seconds, according to UK researchers.

The device replaces the traditional chest piece with a small sensor that records both electrical signals from the heart and the sound of blood flow, which are then analysed in the cloud by AI trained on large datasets.

The AI tool has shown strong results in trials across more than 200 GP practices, with patients tested using the stethoscope being more than twice as likely to be diagnosed with heart failure within 12 months compared with those assessed through usual care.

The device was also 3.45 times more likely to detect atrial fibrillation and almost twice as likely to identify heart valve disease.

Researchers from Imperial College London and Imperial College Healthcare NHS Trust said the technology could help doctors provide treatment at an earlier stage instead of waiting until patients present in hospital with advanced symptoms.

The findings, from the Tricorder study, will be presented at the European Society of Cardiology Congress in Madrid.

The project, supported by the National Institute for Health and Care Research, is now preparing for further rollouts in Wales, south London and Sussex. Experts described the innovation as a significant step in updating a medical tool that has remained largely unchanged for over 200 years.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

How local LLMs are changing AI access

As AI adoption rises, more users explore running large language models (LLMs) locally instead of relying on cloud providers.

Local deployment gives individuals control over their data, reduces costs, and avoids limits imposed by AI-as-a-service companies. Advances in consumer hardware and open-source software now make it practical to experiment with AI on personal machines.

Concerns over privacy and data sovereignty are driving interest. Many cloud AI services retain user data for years, even when privacy assurances are offered.

By running models locally, companies and hobbyists can ensure compliance with GDPR and maintain control over sensitive information while leveraging high-performance AI tools.

Hardware considerations like GPU memory and processing power are central to local LLM performance. Quantisation techniques allow models to run efficiently with reduced precision, enabling use on consumer-grade machines or enterprise hardware.

Software frameworks like llama.cpp, Jan, and LM Studio simplify deployment, making local AI accessible to non-engineers and professionals across industries.
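As a concrete illustration of what local deployment can look like, the sketch below loads a 4-bit quantised GGUF model with llama-cpp-python, the Python binding for llama.cpp. The model path and parameters are placeholders that depend on the model and hardware in use.

```python
# Sketch of running a quantised LLM locally with llama-cpp-python
# (the Python binding for llama.cpp). The model path is a placeholder;
# any GGUF-quantised model downloaded to disk works the same way.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/model-q4_k_m.gguf",  # placeholder: a 4-bit quantised model
    n_ctx=4096,       # context window; larger values need more memory
    n_gpu_layers=-1,  # offload all layers to the GPU if one is available
)

# Inference stays entirely on the local machine: no data leaves the device.
output = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarise GDPR in one sentence."}],
    max_tokens=128,
)
print(output["choices"][0]["message"]["content"])
```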

Local models are suitable for personalised tasks, learning, coding assistance, and experimentation, although cloud models remain stronger for large-scale enterprise applications.

As tools and model quality improve, running AI on personal devices may become a standard alternative, giving users more control over cost, privacy, and performance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

India to host OpenAI’s new Stargate data centre

OpenAI is preparing to build a significant new data centre in India as part of its Stargate AI infrastructure initiative. The move will expand the company’s presence in Asia and strengthen its operations in its second-largest market by user base.

OpenAI has already registered as a legal entity in India and begun assembling a local team.

The company plans to open its first office in New Delhi later this year. Details regarding the exact location and timeline of the proposed data centre remain unclear, though CEO Sam Altman may provide further information during his upcoming visit to India.

The project represents a strategic step to support the company’s growing regional AI ambitions.

OpenAI’s Stargate initiative, announced by US President Donald Trump in January, involves private sector investment of up to $500 billion for AI infrastructure, backed by SoftBank, OpenAI, and Oracle.

The initiative seeks to develop large-scale AI capabilities across major markets worldwide, with the India data centre potentially playing a key role in the efforts.

The expansion highlights OpenAI’s focus on scaling its AI infrastructure while meeting regional demand. The company intends to strengthen operational efficiency, improve service reliability, and support its long-term growth in Asia by establishing local offices and a significant data centre.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!