The Commonwealth Bank of Australia has reversed plans to cut 45 customer service roles following union pressure over the use of AI in its call centres.
The Finance Sector Union argued that CBA was not transparent about call volumes, taking the case to the Workplace Relations Tribunal. Staff reported rising workloads despite claims that the bank’s voice bot reduced calls by 2,000 weekly.
CBA admitted its redundancy assessment was flawed, stating that it had not fully considered the business needs. Impacted employees are being offered the option to remain in their current roles, relocate within the firm, or depart.
The Bank of Australia apologised and pledged to review internal processes. Chief executive Matt Comyn has promoted AI adoption, including a new partnership with OpenAI, but the union called the reversal a ‘massive win’ for workers.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Private conversations with xAI’s chatbot Grok have been exposed online, raising serious concerns over user privacy and AI safety. Forbes found that Grok’s ‘share’ button created public URLs, later indexed by Google and other search engines.
The leaked content is troubling, ranging from questions on hacking crypto wallets to instructions on drug production and even violent plots. Although xAI bans harmful use, some users still received dangerous responses, which are now publicly accessible online.
The exposure occurred because search engines automatically indexed the shareable links, a flaw echoing previous issues with other AI platforms, including OpenAI’s ChatGPT. Designed for convenience, the feature exposed sensitive chats, damaging trust in xAI’s privacy promises.
The incident pressures AI developers to integrate stronger privacy safeguards, such as blocking the indexing of shared content and enforcing privacy-by-design principles. Users may hesitate to use chatbots without fixes, fearing their data could reappear online.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Chief of Microsoft AI, Mustafa Suleyman, has urged AI firms to stop suggesting their models are conscious, warning of growing risks from unhealthy human attachments to AI systems.
In a blog post, he described the phenomenon as Seemingly Conscious AI, where models mimic human responses convincingly enough to give users the illusion of feeling and thought. He cautioned that this could fuel AI rights, welfare, or citizenship advocacy.
Suleyman stressed that such beliefs could emerge even among people without prior mental health issues. He called on the industry to develop guardrails that prevent or counter perceptions of AI consciousness.
AI companions, a fast-growing product category, were highlighted as requiring urgent safeguards. Microsoft AI chief’s comments follow recent controversies, including OpenAI’s decision to temporarily deprecate GPT-4o, which drew protests from users emotionally attached to the model.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Researchers at the University of California, Davis, have revealed that generative AI browser assistants may be harvesting sensitive data from users without their knowledge or consent.
The study, led by the UC Davis Data Privacy Lab, tested popular browser extensions powered by AI and discovered that many collect personal details ranging from search history and email contents to financial records.
The findings highlight a significant gap in transparency. While these tools often market themselves as productivity boosters or safe alternatives to traditional assistants, many lack clear disclosures about the data they extract.
Privacy advocates argue that the lack of accountability puts users at significant risk, particularly given the rising adoption of AI assistants for work, education and healthcare. They warn that sensitive data could be exploited for targeted advertising, profiling, or cybercrime.
The UC Davis team has called for stricter regulatory oversight, improved data governance, and mandatory safeguards to protect users from hidden surveillance.
They argue that stronger frameworks are needed to balance innovation with fundamental rights as generative AI tools continue to integrate into everyday digital infrastructure.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Cybersecurity researchers have uncovered a new AI browser exploit that allows attackers to manipulate autonomous systems using fake CAPTCHA checks.
The PromptFix method tricks agentic AI models into executing commands embedded in deceptive web elements invisible to the user.
Guardio Labs demonstrated that the Comet AI browser could be misled into adding items to a cart and auto-filling sensitive data.
Comet completed fake purchases without user confirmation in some tests, raising concerns over AI trust chains and phishing exposure.
Attackers can also exploit AI email agents by embedding malicious links, prompting the system to bypass user review and reveal credentials.
ChatGPT’s Agent Mode showed similar vulnerabilities but confined actions to a sandbox, preventing direct exposure to user systems.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Google has patched a high-severity flaw in its Chrome browser with the release of version 139, addressing vulnerability CVE-2025-9132 in the V8 JavaScript engine.
The out-of-bounds write issue was discovered by Big Sleep AI, a tool built by Google DeepMind and Project Zero to automate vulnerability detection in real-world software.
Chrome 139 updates (Windows/macOS: 139.0.7258.138/.139, Linux: 139.0.7258.138) are now rolling out to users. Google has not confirmed whether the flaw is being actively exploited.
Users are strongly advised to install the latest update to ensure protection, as V8 powers both JavaScript and WebAssembly within Chrome.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
A new study reveals that prominent AI models now show a marked preference for AI‑generated content over that created by humans.
Tests involving GPT‑3.5, GPT-4 and Llama 3.1 demonstrated a consistent bias, with models selecting AI‑authored text significantly more often than human‑written equivalents.
Researchers warn this tendency could marginalise human creativity, especially in fields like education, hiring and the arts, where original thought is crucial.
There are concerns that such bias may arise not by accident but by design flaws embedded within the development of these systems.
Policymakers and developers are urged to tackle this bias head‑on to ensure future AI complements rather than replaces human contribution.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Google’s upcoming Pixel 10 smartphones are tipped to place AI at the centre of the user experience, with three new features expected to redefine how people use their devices.
While hardware upgrades are anticipated at the Made by Google event, much of the excitement revolves around the AI tools that may debut.
One feature, called Help Me Edit, is designed for Google Photos. Instead of spending time on manual edits, users could describe the change they want, such as altering the colour of a car, and the AI would adjust instantly.
Expanding on the Pixel 9’s generative tools, it promises far greater control and speed.
Another addition, Camera Coach, could offer real-time guidance on photography. Using Google’s Gemini AI, the phone may provide step-by-step advice on framing, lighting, and composition, acting as a digital photography tutor.
Finally, Pixel Sense is rumoured to be a proactive personal assistant that anticipates user needs. Learning patterns from apps such as Gmail and Calendar, it could deliver predictive suggestions and take actions across third-party services, bringing the smartphone closer to a truly adaptive companion.
These features suggest that Google is betting heavily on AI to give the Pixel 10 a competitive edge.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Elon Musk has taken an unexpected conciliatory turn in his feud with Sam Altman by praising a ChatGPT-5 response, ‘I don’t know’, as more valuable than overconfident answers. Musk described it as ‘a great answer’ from the AI chatbot.
Initially sparked by Musk accusing Apple of favouring ChatGPT in App Store rankings and Altman firing back with claims of manipulation on X, the feud has taken on new dimensions as AI itself seems to weigh in.
At one point, xAI’s Grok chat assistant sided with Altman, while ChatGPT offered a supportive nod to Musk. These chatbot alignments have introduced confusion and irony into a clash already rich with irony.
Musk’s praise of a modest AI response contrasts sharply with the often intense claims of supremacy. It signals a rare acknowledgement of restraint and clarity, even from an avowed critic of OpenAI.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
OpenAI CEO Sam Altman has warned that the United States may be underestimating China’s rapid advances in AI. He argued that export controls on semiconductors are unlikely to be a reliable long-term solution to the global AI race.
At a press briefing in San Francisco, Altman said the competition cannot be reduced to a simple scoreboard. China can expand inference capacity more quickly, even as Washington tightens restrictions on advanced semiconductor exports.
He expressed doubts about the effectiveness of purely policy-driven approaches. ‘You can export-control one thing, but maybe not the right thing… workarounds exist,’ Altman said. He stressed that chip controls may not keep pace with technological realities.
Meanwhile, Chinese firms are accelerating efforts to replace US suppliers, with Huawei and others building domestic alternatives. Altman suggested this push for self-sufficiency could undermine Washington’s goals, raising questions about America’s strategy in the AI race.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!