MIT highlights divide in business AI project success

A new MIT study has found that 95% of corporate AI projects fail to deliver returns, mainly due to difficulties integrating them with existing workflows.

The report, ‘The GenAI Divide: State of AI in Business 2025’, examined 300 deployments and interviewed 350 employees. Only 5% of projects generated value, typically when focused on solving a single, clearly defined problem.

Executives often blamed model performance, but researchers pointed to a workforce ‘learning gap’ as the bigger barrier. Many projects faltered because staff were unprepared to adapt processes effectively.

More than half of GenAI budgets were allocated to sales and marketing, yet the most substantial returns came from automating back-office tasks, such as reducing agency costs and streamlining roles.

The study also found that tools purchased from specialised vendors were about twice as successful as in-house systems, succeeding 67% of the time compared with 33%.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Google Cloud boosts AI security with agentic defence tools

Google Cloud has unveiled a suite of security enhancements at its Security Summit 2025, focusing on protecting AI innovations and empowering cybersecurity teams with AI-driven defence tools.

VP and GM Jon Ramsey highlighted the growing need for specialised safeguards as enterprises deploy AI agents across complex environments.

Central to the announcements is the concept of an ‘agentic security operations centre,’ where AI agents coordinate actions to achieve shared security objectives. It represents a shift from reactive security approaches to proactive, agent-supported strategies.

Google’s platform integrates automated discovery, threat detection, and response mechanisms to streamline security operations and cover gaps in existing infrastructures.

Key innovations include extended protections for AI agents through Model Armor, which covers Agentspace prompts and responses to mitigate prompt injection attacks, jailbreaking, and data leakage.

The Alert Investigation agent, available in preview, automates enrichment and analysis of security events while offering actionable recommendations, reducing manual effort and accelerating response times.

Integrating Mandiant threat intelligence feeds and Gemini AI strengthens detection and incident response across agent environments.

Additional tools, such as SecOps Labs and native SOAR dashboards, provide organisations with early access to AI-powered threat detection experiments and comprehensive security visualisation capabilities.

CBA reverses AI-driven job cuts after union pressure

The Commonwealth Bank of Australia has reversed plans to cut 45 customer service roles following union pressure over the use of AI in its call centres.

The Finance Sector Union argued that CBA was not transparent about call volumes, taking the case to the Workplace Relations Tribunal. Staff reported rising workloads despite claims that the bank’s voice bot reduced calls by 2,000 weekly.

CBA admitted its redundancy assessment was flawed, stating that it had not fully considered the business needs. Impacted employees are being offered the option to remain in their current roles, relocate within the firm, or depart.

The bank apologised and pledged to review internal processes. Chief executive Matt Comyn has promoted AI adoption, including a new partnership with OpenAI, but the union called the reversal a ‘massive win’ for workers.

Google to replace Assistant with Gemini in smart home devices

Google has announced that Gemini will soon power its smart home platform, replacing Google Assistant on existing Nest speakers and displays from October. The feature will launch initially as an early preview.

Gemini for Home promises more natural conversations and can manage complex household tasks, including controlling smart devices, creating calendars, and handling lists or timers through natural language commands. It will also support Gemini Live for ongoing dialogue.

Google says the upgrade is designed to serve all household members and visitors, offering hands-free help and integration with streaming platforms. The move signals a renewed focus on Google Home, a product line that has been largely overlooked in recent years.

The announcement hints at potential new hardware, given that Google’s last Nest Hub was released in 2021 and the Nest Audio speaker dates back to 2020.

DeepSeek launches upgraded AI system with stronger agent capability

DeepSeek has released a minor upgrade, V3.1, yet conspicuously omitted any R1 label from its chatbot, leading to speculation over the status of the promised R2 model.

The V3.1 version includes improvements such as an expanded 128K token context window for holding more information per interaction, but lacks major innovation beyond that. Observers note that the absence of R1 suggests that DeepSeek may be reworking its roadmap or shifting focus.

Industry watchers point to the gap left by this update, especially in light of reported delays to the R2 model, which has faced technical setbacks tied to hardware issues and training challenges with domestic chips. Competitors are gaining ground as a result.

With no official statement from DeepSeek and a quieter-than-usual announcement, delivered only to a WeChat user group, analysts are questioning whether the company is rethinking its product sequencing or concealing delays in rolling out the next-generation R2 reasoning model.

Meta freezes hiring as AI costs spark investor concern

Meta has frozen hiring in its AI division, halting a spree that had drawn top researchers with lucrative offers. The company described the pause as basic organisational planning, aimed at building a more stable structure for its superintelligence ambitions.

The freeze, first reported by the Wall Street Journal, began last week and prevents employees in the unit from transferring to other teams. Its duration has not been communicated, and Meta declined to comment on the number of hires already made.

The decision follows growing tensions inside the newly created Superintelligence Labs, where long-serving researchers have voiced concerns over disparities in pay and recognition compared with recruits.

Alexandr Wang, who leads the division, recently told staff that superintelligence is approaching and that major changes are necessary to prepare. His email outlined Meta’s most significant reorganisation of its AI efforts to date.

The pause also comes amid investor scrutiny, as analysts warn that heavy reliance on stock-based compensation to attract talent could dilute shareholder value without delivering clear results.

Despite these concerns, Meta’s stock has risen by about 28% since the start of the year, reflecting continued investor confidence in the company’s long-term prospects.

Grok chatbot leaks spark major AI privacy concerns

Private conversations with xAI’s chatbot Grok have been exposed online, raising serious concerns over user privacy and AI safety. Forbes found that Grok’s ‘share’ button created public URLs, later indexed by Google and other search engines.

The leaked content is troubling, ranging from questions on hacking crypto wallets to instructions on drug production and even violent plots. Although xAI bans harmful use, some users still received dangerous responses, which are now publicly accessible online.

The exposure occurred because search engines automatically indexed the shareable links, a flaw echoing previous issues with other AI platforms, including OpenAI’s ChatGPT. Designed for convenience, the feature exposed sensitive chats, damaging trust in xAI’s privacy promises.

The incident adds pressure on AI developers to build in stronger privacy safeguards, such as blocking search-engine indexing of shared content and enforcing privacy-by-design principles. Without such fixes, users may hesitate to use chatbots, fearing their conversations could resurface online.
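Blocking indexing of shared chats is straightforward in principle: any page generated from a ‘share’ link can carry standard crawler directives. The sketch below is purely illustrative (the function and constant names are invented, not xAI’s actual code); it shows the two common mechanisms, an `X-Robots-Tag` response header and a `robots` meta tag, both of which major search engines honour.

```python
# Illustrative sketch only: a handler for user-shared chat pages that
# attaches "do not index" directives. Names are hypothetical.

NOINDEX = "noindex, nofollow"

def render_shared_chat(share_id: str, body: str) -> tuple[dict, str]:
    """Return (headers, html) for a shared-conversation page."""
    headers = {
        "Content-Type": "text/html; charset=utf-8",
        # HTTP-level directive, honoured by major search engines
        "X-Robots-Tag": NOINDEX,
    }
    html = (
        "<html><head>"
        f'<meta name="robots" content="{NOINDEX}">'  # page-level fallback
        f"</head><body><h1>Shared chat {share_id}</h1>{body}</body></html>"
    )
    return headers, html
```

In practice a site would also exclude the share-link path in `robots.txt` and, ideally, gate shared pages behind unguessable, expiring URLs rather than relying on crawler directives alone.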

Rethinking ‘soft skills’ as core drivers of transformation

Communication, empathy, and judgment were dismissed for years as ‘soft skills’, sidelined while technical expertise dominated training and promotion. A new perspective argues that these human competencies are fundamental to resilience and transformation.

Researchers and practitioners emphasise that AI can expedite decision-making but cannot replace human judgment, trust, or narrative. Failures in leadership often stem from a lack of human capacity rather than technical gaps.

Redefining skills like decision-making, adaptability, and emotional intelligence as measurable behaviours helps organisations train and evaluate leaders effectively. Embedding these human disciplines ensures transformation holds under pressure and uncertainty.

Careers and cultures are strengthened when leaders are assessed on their ability to build trust, resolve conflicts, and influence through storytelling. Without investment in the human core alongside technical skills, strategies collapse and talent disengages.

Microsoft executive Mustafa Suleyman highlights risks of seemingly conscious AI

Chief of Microsoft AI, Mustafa Suleyman, has urged AI firms to stop suggesting their models are conscious, warning of growing risks from unhealthy human attachments to AI systems.

In a blog post, he described the phenomenon as Seemingly Conscious AI, where models mimic human responses convincingly enough to give users the illusion of feeling and thought. He cautioned that this could fuel advocacy for AI rights, welfare, or even citizenship.

Suleyman stressed that such beliefs could emerge even among people without prior mental health issues. He called on the industry to develop guardrails that prevent or counter perceptions of AI consciousness.

AI companions, a fast-growing product category, were highlighted as requiring urgent safeguards. The Microsoft AI chief’s comments follow recent controversies, including OpenAI’s decision to temporarily deprecate GPT-4o, which drew protests from users emotionally attached to the model.

Study warns of AI browser assistants collecting sensitive data

Researchers at the University of California, Davis, have revealed that generative AI browser assistants may be harvesting sensitive data from users without their knowledge or consent.

The study, led by the UC Davis Data Privacy Lab, tested popular browser extensions powered by AI and discovered that many collect personal details ranging from search history and email contents to financial records.

The findings highlight a significant gap in transparency. While these tools often market themselves as productivity boosters or safe alternatives to traditional assistants, many lack clear disclosures about the data they extract.

Researchers sometimes observed personal information being transmitted to third-party servers without encryption.

Privacy advocates argue that the lack of accountability puts users at significant risk, particularly given the rising adoption of AI assistants for work, education and healthcare. They warn that sensitive data could be exploited for targeted advertising, profiling, or cybercrime.

The UC Davis team has called for stricter regulatory oversight, improved data governance, and mandatory safeguards to protect users from hidden surveillance.

They argue that stronger frameworks are needed to balance innovation with fundamental rights as generative AI tools continue to integrate into everyday digital infrastructure.
