Employees embrace AI but face major training and trust gaps

SnapLogic has published new research highlighting how AI adoption reshapes daily work across industries while exposing trust, training, and leadership strategy gaps.

The study finds that 78% of employees already use AI in their roles, with half using autonomous AI agents. Workers interact with AI almost daily and save over three hours per week. However, 94% say they face barriers to practical use, with concerns over data privacy and security topping the list.

Based on a survey of 3,000 US, UK, and German employees, the research finds widespread but uneven AI support. Training is a significant gap, with only 63% receiving company-led education. Many rely on trial and error, and managers are more likely to be trained than non-managers.

Generational and hierarchical differences are also evident. Seventy percent of managers express strong confidence in AI, compared with 43% of non-managers, and half of respondents believe that in the future they will be managed by AI agents rather than people.

SnapLogic’s CTO, Jeremiah Stone, says the agile enterprise is about easing workloads and sparking creativity, not replacing people. The findings underscore the need for companies to align strategy, training, and trust to fully realise AI’s potential in the workplace.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI tools reshape how Gen Z approaches buying cars

Gen Z drivers are increasingly turning to AI tools to help them decide which car to buy. A new Motor Ombudsman survey of 1,100 UK drivers finds that over one in four Gen Z drivers would rely on AI guidance when purchasing a vehicle, compared with 12% of Gen X drivers and just 6% of Baby Boomers.

Younger drivers view AI as a neutral and judgment-free resource. Nearly two-thirds say it helps them make better decisions, while over half appreciate the ability to ask unlimited questions. Many see AI as a fast and convenient way to access information during car-buying.

Three-quarters of Gen Z respondents believe AI could help them estimate price ranges, while 60% think it would improve their haggling skills. Around four in ten say it would help them assess affordability and running costs, a sentiment less common among Millennials and Gen Xers.

Confidence levels also vary across generations. About 86% of Gen Z and 87% of Millennials say they would feel more assured if they used AI before making a purchase, compared with 39% of Gen Xers and 40% of Boomers, many of whom remain indifferent to its influence.

Almost half of drivers say they would take AI-generated information at face value. Gen Z is the most trusting, while older generations remain cautious. The Motor Ombudsman urges buyers to treat AI as a complement to trusted research and retailer checks.

ChatGPT introduces new generation of interactive apps

A new generation of interactive apps is arriving in ChatGPT, allowing users to engage with tools like Canva, Spotify, and Booking.com directly through conversation. The apps appear naturally during chats, enabling users to create, learn, and explore within the same interface.

Developers can now build their own ChatGPT apps using the newly launched Apps SDK, released in preview as an open standard based on the Model Context Protocol. The SDK includes documentation, examples, and testing tools, with app submissions and monetisation to follow later this year.

Over 800 million ChatGPT users can now access these apps on Free, Go, Plus and Pro plans, excluding EU regions for the moment. Early partners include Booking.com, Coursera, Canva, Figma, Expedia, Spotify, and Zillow, with more to follow later in the year.

Apps respond to natural language and integrate interactive features such as maps, playlists, and slides directly in chat. ChatGPT can even suggest relevant apps during conversations, for instance showing Zillow listings when discussing home purchases or prompting Spotify for a party playlist.

Google unveils CodeMender, an AI agent that repairs code vulnerabilities

Google researchers have unveiled CodeMender, an AI-powered agent designed to automatically detect and fix software vulnerabilities.

The tool aims to improve code security by generating and applying patches that address critical flaws, allowing developers to focus on building reliable software instead of manually locating and repairing weaknesses.

Built on the Gemini Deep Think models, CodeMender operates autonomously, identifying vulnerabilities, reasoning about the underlying code, and validating patches to ensure they are correct and do not introduce regressions.

Over the past six months, it has contributed 72 security fixes to open source projects, including those with millions of lines of code.

The system combines advanced program analysis with multi-agent collaboration to strengthen its decision-making. It employs techniques such as static and dynamic analysis, fuzzing and differential testing to trace the root causes of vulnerabilities.
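CodeMender's internals are not public, but one of the techniques named above, differential testing, is easy to illustrate. The sketch below is a hypothetical, minimal example (not Google's implementation): it fuzzes a reference function and a candidate patched function with random inputs and reports any input on which their behaviour diverges, which is how a patch can be checked for regressions.

```python
import random

def reference_parse(s: str) -> int:
    # Reference implementation: sum of the digit characters in a string.
    return sum(int(c) for c in s if c.isdigit())

def patched_parse(s: str) -> int:
    # Candidate "patched" implementation to validate against the reference.
    # Here it is intentionally equivalent, so no divergence should be found.
    total = 0
    for c in s:
        if "0" <= c <= "9":
            total += ord(c) - ord("0")
    return total

def differential_test(f, g, trials: int = 1000, seed: int = 0) -> list:
    """Fuzz both functions with random inputs and collect divergent inputs."""
    rng = random.Random(seed)
    alphabet = "abc0123456789 !"
    mismatches = []
    for _ in range(trials):
        s = "".join(rng.choice(alphabet) for _ in range(rng.randrange(0, 20)))
        if f(s) != g(s):
            mismatches.append(s)
    return mismatches

mismatches = differential_test(reference_parse, patched_parse)
print(f"{len(mismatches)} divergent inputs found")
```

In a real pipeline, any mismatch would be minimised and fed back to the patching agent as evidence of a behavioural regression.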

Each proposed fix undergoes rigorous validation before being reviewed by human developers to guarantee quality and compliance with coding standards.

According to Google, CodeMender’s dual approach (reactively patching new flaws and proactively rewriting code to eliminate entire vulnerability classes) represents a major step forward in AI-driven cybersecurity.

The company says the tool’s success demonstrates how AI can transform the maintenance and protection of modern software systems.

Policy hackathon shapes OpenAI proposals ahead of EU AI strategy

OpenAI has published 20 policy proposals to speed up AI adoption across the EU. Released shortly before the European Commission’s Apply AI Strategy, the report outlines practical steps for member states, businesses, and the public sector to bridge the gap between ambition and deployment.

The proposals originate from Hacktivate AI, a Brussels hackathon with 65 participants from EU institutions, governments, industry, and academia. They focus on workforce retraining, SME support, regulatory harmonisation, and public sector collaboration, highlighting OpenAI’s growing policy role in Europe.

Key ideas include Individual AI Learning Accounts to support workers, an AI Champions Network to mobilise SMEs, and a European GovAI Hub to share resources with public institutions. OpenAI’s Martin Signoux said the goal was to bridge the divide between strategy and action.

Europe already represents a major market for OpenAI tools, with widespread use among developers and enterprises, including Sanofi, Parloa, and Pigment. Yet adoption remains uneven, with IT and finance leading, manufacturing catching up, and other sectors lagging behind, exposing a widening digital divide.

The European Commission is expected to unveil its Apply AI Strategy within days. OpenAI’s proposals act as a direct contribution to the policy debate, complementing previous initiatives such as its EU Economic Blueprint and partnerships with governments in Germany and Greece.

EU digital laws simplified by CEPS Task Force to boost innovation

The Centre for European Policy Studies (CEPS) Task Force, titled ‘Next Steps for EU Law and Regulation for the Digital World’, aims to refine and simplify the EU’s digital rulebook.

This rulebook now covers key legislation, including the Digital Markets Act (DMA), Digital Services Act (DSA), GDPR, Data Act, AI Act, Data Governance Act (DGA), and Cyber Resilience Act (CRA).

While these laws position Europe as a global leader in digital regulation, they also create complexity, overlaps, and legal uncertainty.

The Task Force focuses on enhancing coherence, efficiency, and consistency across digital acts while maintaining strong protections for consumers and businesses.

The CEPS Task Force emphasises targeted reforms to reduce compliance burdens, especially for SMEs, and strengthen safeguards.

It also promotes procedural improvements, including robust impact assessments, independent ex-post evaluations, and the adoption of RegTech solutions to streamline compliance and make regulation more adaptive.

Between November 2025 and January 2026, the Task Force will hold four workshops addressing: alignment of the DMA with competition law, fine-tuning the DSA, improving data governance, enhancing GDPR trust, and ensuring AI Act coherence.

The findings will be published in a Final Report in March 2026, outlining a simpler, more agile EU digital regulatory framework that fosters innovation, reduces regulatory burdens, and upholds Europe’s values.

Deloitte’s AI blunder: A costly lesson in consultancy business

Deloitte has agreed to refund the Australian government the full $440,000 fee after acknowledging major errors in a consultancy report on welfare mutual obligations. The errors stemmed from the use of AI tools, which produced fabricated content, including false quotes attributed to a Federal Court case on the Robodebt scheme and fictitious academic references.

The incident underscores the challenges of deploying AI in crucial government consultancy projects without sufficient human oversight, raising questions about the credibility of policy decisions influenced by such flawed reports.

In response to these errors, Deloitte has publicly accepted full responsibility and committed to refunding the government. The firm is re-evaluating its internal quality assurance procedures and has emphasised the necessity of rigorous human review to maintain the integrity of consultancy projects that utilise AI.

The situation has prompted the Australian government to reassess its reliance on AI-generated content for policy analysis, and it is now reviewing oversight mechanisms to prevent a recurrence. The report's inaccuracies had previously swayed discussions on welfare compliance, shaking public trust in the consultancy services used for critical government policymaking.

The broader consultancy industry is feeling the ripple effects, as this incident highlights the reputational and financial dangers of unchecked AI outputs. As AI becomes more prevalent for its efficiency, this case serves as a stark reminder of its limitations, particularly in sensitive government matters.

Industry pressure is growing for firms to enhance their quality control measures, disclose the level of AI involvement in their reports, and ensure that technology use does not compromise information quality. The Deloitte case adds to ongoing discussions about the ethical and practical integration of AI into professional services, reinforcing the imperative for human oversight and editorial controls even as AI technology progresses.

AI transforms retinal scans into predictive health tools

AI and machine learning are transforming ophthalmology, turning retinal scans into powerful tools for predicting health risks. Retinal images offer a non-invasive view of blood vessels and nerve fibres, revealing risk markers for high blood pressure and for kidney, heart, and stroke-related conditions.

With lifestyle-related illnesses on the rise, early detection through eye scans has become increasingly important.

Technologies like fundus photography and optical coherence tomography-angiography (OCT-A) now enable detailed imaging of retinal vessels. Researchers use AI to analyse these images, identifying microvascular biomarkers linked to systemic diseases.

Novel approaches such as ‘oculomics’ allow clinicians to predict surgical outcomes for macular hole treatment and assess patients’ risk levels for multiple conditions in one scan.

AI is also applied to diabetes screening, particularly in countries with significant at-risk populations. Deep learning frameworks can estimate average blood sugar levels (HbA1c) from retinal images, offering a non-invasive, cost-effective alternative to blood tests.
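The deep learning frameworks mentioned above are not described in detail here, but the underlying idea, regressing a clinical value from an image-derived feature, can be shown in miniature. The sketch below is entirely synthetic and hypothetical: the "vessel feature" and its relationship to HbA1c are invented for illustration, and real systems learn from full retinal images with deep networks rather than a single hand-picked feature.

```python
import random

def fit_linear(xs, ys):
    """Ordinary least squares for y = a*x + b in one dimension."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

rng = random.Random(42)
# Synthetic training data: a hypothetical retinal "vessel feature" in [0, 1]
# mapped to HbA1c (%) with noise; the coefficients 3.0 and 5.0 are invented.
xs = [rng.random() for _ in range(200)]
ys = [5.0 + 3.0 * x + rng.gauss(0, 0.2) for x in xs]

a, b = fit_linear(xs, ys)
print(f"fitted model: HbA1c ~ {a:.2f} * feature + {b:.2f}")
```

The point of the sketch is only that a continuous clinical value can be estimated from image-derived measurements, which is what makes such screening non-invasive and cheap relative to blood tests.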

Despite its promise, AI in ophthalmology faces challenges. Limited and non-diverse datasets can reduce accuracy, and the ‘black box’ nature of AI decision-making can make doctors hesitant.

Collaborative efforts to share anonymised patient data and develop more transparent AI models are helping to overcome these hurdles, paving the way for safer and more reliable applications.

Anthropic’s Claude to power Deloitte’s new enterprise AI expansion

Deloitte entered a new enterprise AI partnership with Anthropic shortly after refunding the Australian government for a report that included inaccurate AI-generated information.

The A$439,000 (US$290,618) contract was intended for an independent review but contained fabricated citations to non-existent academic sources. Deloitte has since repaid the final instalment, and the government of Australia has released a corrected version of the report.

Despite the controversy, Deloitte is expanding its use of AI by integrating Anthropic’s Claude chatbot across its global workforce of nearly half a million employees.

The collaboration will focus on developing AI-driven tools for compliance, automation, and data analysis, especially in highly regulated industries such as finance and healthcare.

The companies also plan to design AI agent personas tailored to Deloitte’s various departments to enhance productivity and decision-making. Financial terms of the agreement were not disclosed.

Gamers report widespread disconnections across multiple services

Several major gaming and online platforms have reportedly faced simultaneous disruptions across multiple devices and regions. Platforms like Steam and Riot Games experienced connection issues, blocking access to major titles such as Counter-Strike, Dota 2, Valorant, and League of Legends.

Some users reported issues with PlayStation Network, Epic Games, Hulu, AWS, and other services.

Experts suggest the outages may be linked to a possible DDoS attack from the Aisuru botnet. While official confirmations remain limited, reports indicate unusually high traffic, with one source claiming bandwidth levels near 30 terabits per second.

Similar activity from Aisuru has been noted in incidents dating back to 2024, targeting a range of internet-connected devices.

The botnet is thought to exploit vulnerabilities in routers, cameras, and other connected devices, potentially controlling hundreds of thousands of nodes. Researchers say the attacks are widespread across countries and industries, though their full scale and purpose remain uncertain.

Further investigations are ongoing, and platforms continue to monitor and respond to potential threats. Users are advised to remain aware of service updates and exercise caution when accessing online networks during periods of unusual activity.
