A new MIT study has found that 95% of corporate AI projects fail to deliver returns, mainly due to difficulties integrating them with existing workflows.
The report, ‘The GenAI Divide: State of AI in Business 2025’, examined 300 deployments and interviewed 350 employees. Only 5% of projects generated value, typically when focused on solving a single, clearly defined problem.
Executives often blamed model performance, but researchers pointed to a workforce ‘learning gap’ as the bigger barrier. Many projects faltered because staff were unprepared to adapt processes effectively.
More than half of GenAI budgets were allocated to sales and marketing, yet the most substantial returns came from automating back-office tasks, such as reducing agency costs and streamlining roles.
The study also found that tools purchased from specialised vendors were roughly twice as successful as systems built in-house, succeeding 67% of the time compared with 33%.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
The Commonwealth Bank of Australia has reversed plans to cut 45 customer service roles following union pressure over the use of AI in its call centres.
The Finance Sector Union argued that CBA was not transparent about call volumes, taking the case to the Workplace Relations Tribunal. Staff reported rising workloads despite claims that the bank’s voice bot reduced calls by 2,000 weekly.
CBA admitted its redundancy assessment was flawed, stating that it had not fully considered its business needs. Affected employees are being offered the choice to remain in their current roles, move elsewhere within the bank, or leave.
The bank apologised and pledged to review its internal processes. Chief executive Matt Comyn has championed AI adoption, including a new partnership with OpenAI, but the union called the reversal a ‘massive win’ for workers.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Private conversations with xAI’s chatbot Grok have been exposed online, raising serious concerns over user privacy and AI safety. Forbes found that Grok’s ‘share’ button created public URLs, later indexed by Google and other search engines.
The leaked content is troubling, ranging from questions on hacking crypto wallets to instructions on drug production and even violent plots. Although xAI bans harmful use, some users still received dangerous responses, which are now publicly accessible online.
The exposure occurred because search engines automatically indexed the shareable links, a flaw echoing previous issues with other AI platforms, including OpenAI’s ChatGPT. Designed for convenience, the feature exposed sensitive chats, damaging trust in xAI’s privacy promises.
The incident puts pressure on AI developers to build in stronger privacy safeguards, such as blocking search engines from indexing shared content and enforcing privacy-by-design principles. Without such fixes, users may hesitate to use chatbots, fearing their conversations could resurface online.
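A minimal sketch of one such safeguard, assuming a Python web backend (the Flask routes and paths here are hypothetical, not xAI’s actual implementation): shared-conversation pages can be served with a noindex directive, and the share path excluded in robots.txt, so compliant crawlers never index them.

```python
from flask import Flask, Response

app = Flask(__name__)

@app.route("/share/<chat_id>")
def shared_chat(chat_id):
    # Hypothetical handler for a shareable conversation page.
    resp = Response(f"Shared conversation {chat_id}")
    # X-Robots-Tag tells compliant crawlers not to index or follow this URL.
    resp.headers["X-Robots-Tag"] = "noindex, nofollow"
    return resp

@app.route("/robots.txt")
def robots():
    # Belt and braces: also ask crawlers to stay out of the share path entirely.
    return Response("User-agent: *\nDisallow: /share/\n", mimetype="text/plain")
```

A noindex directive only helps for pages not yet crawled; URLs already in search results also need removal requests, which is why privacy-by-design measures such as unlisted, expiring, or authenticated share links are the stronger fix.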
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Microsoft AI chief Mustafa Suleyman has urged AI firms to stop suggesting their models are conscious, warning of the growing risks of unhealthy human attachment to AI systems.
In a blog post, he described the phenomenon as ‘Seemingly Conscious AI’, where models mimic human responses convincingly enough to give users the illusion of feeling and thought. He cautioned that this could fuel advocacy for AI rights, welfare, or even citizenship.
Suleyman stressed that such beliefs could emerge even among people without prior mental health issues. He called on the industry to develop guardrails that prevent or counter perceptions of AI consciousness.
AI companions, a fast-growing product category, were singled out as requiring urgent safeguards. Suleyman’s comments follow recent controversies, including OpenAI’s decision to temporarily deprecate GPT-4o, which drew protests from users emotionally attached to the model.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Researchers at the University of California, Davis, have revealed that generative AI browser assistants may be harvesting sensitive data from users without their knowledge or consent.
The study, led by the UC Davis Data Privacy Lab, tested popular browser extensions powered by AI and discovered that many collect personal details ranging from search history and email contents to financial records.
The findings highlight a significant gap in transparency. While these tools often market themselves as productivity boosters or safe alternatives to traditional assistants, many lack clear disclosures about the data they extract.
Researchers sometimes observed personal information being transmitted to third-party servers without encryption.
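As an illustration of the kind of check behind such a finding (the endpoints below are hypothetical, not those named in the study), a simple audit script can flag any captured extension request that leaves the browser over plain HTTP:

```python
from urllib.parse import urlparse

# Hypothetical request log of the kind captured while auditing an
# AI browser extension; these endpoints are illustrative only.
observed_requests = [
    "https://api.example-assistant.com/v1/summarise",
    "http://telemetry.example-analytics.net/collect",  # plaintext HTTP
]

for url in observed_requests:
    # Anything not sent over HTTPS travels unencrypted on the wire.
    if urlparse(url).scheme != "https":
        print(f"Unencrypted transmission: {url}")
```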
Privacy advocates argue that the lack of accountability puts users at significant risk, particularly given the rising adoption of AI assistants for work, education and healthcare. They warn that sensitive data could be exploited for targeted advertising, profiling, or cybercrime.
The UC Davis team has called for stricter regulatory oversight, improved data governance, and mandatory safeguards to protect users from hidden surveillance.
They argue that stronger frameworks are needed to balance innovation with fundamental rights as generative AI tools continue to integrate into everyday digital infrastructure.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Google has patched a high-severity flaw in its Chrome browser with the release of version 139, addressing vulnerability CVE-2025-9132 in the V8 JavaScript engine.
The out-of-bounds write issue was discovered by Big Sleep AI, a tool built by Google DeepMind and Project Zero to automate vulnerability detection in real-world software.
Chrome 139 updates (Windows/macOS: 139.0.7258.138/.139, Linux: 139.0.7258.138) are now rolling out to users. Google has not confirmed whether the flaw is being actively exploited.
Users are strongly advised to install the latest update to ensure protection, as V8 powers both JavaScript and WebAssembly within Chrome.
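As a quick way to act on that advice, the sketch below compares an installed build against the first patched version listed above; the installed_version value is a placeholder, which on a real machine would be copied from chrome://version.

```python
# Minimal version check against the patched builds named in the advisory.
PATCHED = (139, 0, 7258, 138)  # first fixed build on Windows and Linux

def parse(version: str) -> tuple[int, ...]:
    """Turn a dotted Chrome version string into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

installed_version = "139.0.7258.127"  # placeholder; copy from chrome://version
if parse(installed_version) >= PATCHED:
    print("This build includes the CVE-2025-9132 fix.")
else:
    print("Update Chrome: this build predates the CVE-2025-9132 patch.")
```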
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
A new study reveals that prominent AI models now show a marked preference for AI-generated content over content created by humans.
Tests involving GPT-3.5, GPT-4 and Llama 3.1 demonstrated a consistent bias, with the models selecting AI-authored text significantly more often than human-written equivalents.
Researchers warn this tendency could marginalise human creativity, especially in fields such as education, hiring and the arts, where original thought is crucial.
There are concerns that such bias may stem not from chance but from flaws embedded in how these systems are built and trained.
Policymakers and developers are urged to tackle this bias head-on to ensure future AI complements rather than replaces human contribution.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Google’s upcoming Pixel 10 smartphones are tipped to place AI at the centre of the user experience, with three new features expected to redefine how people use their devices.
While hardware upgrades are anticipated at the Made by Google event, much of the excitement revolves around the AI tools that may debut.
One feature, called Help Me Edit, is designed for Google Photos. Instead of spending time on manual edits, users could describe the change they want, such as altering the colour of a car, and the AI would apply it instantly.
Expanding on the Pixel 9’s generative tools, it promises far greater control and speed.
Another addition, Camera Coach, could offer real-time guidance on photography. Using Google’s Gemini AI, the phone may provide step-by-step advice on framing, lighting, and composition, acting as a digital photography tutor.
Finally, Pixel Sense is rumoured to be a proactive personal assistant that anticipates user needs. Learning patterns from apps such as Gmail and Calendar, it could deliver predictive suggestions and take actions across third-party services, bringing the smartphone closer to a truly adaptive companion.
These features suggest that Google is betting heavily on AI to give the Pixel 10 a competitive edge.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Elon Musk has taken an unexpectedly conciliatory turn in his feud with Sam Altman by praising a GPT-5 response, ‘I don’t know’, as more valuable than an overconfident answer. Musk described it as ‘a great answer’ from the AI chatbot.
At one point, xAI’s Grok chat assistant sided with Altman, while ChatGPT offered a supportive nod to Musk. These chatbot alignments have added a layer of confusion to a clash already rich with irony.
Musk’s praise of a modest AI response contrasts sharply with the intense claims of supremacy that usually dominate the AI race. It signals a rare acknowledgement of restraint and clarity, even from an avowed critic of OpenAI.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!