US tech stocks have stumbled after a sharp rally, with investors increasingly cautious over AI-linked valuations and shifting market conditions. The S&P 500 tech sector has dropped around 2.5% this week, while the Nasdaq has slipped 2%, led by losses in Nvidia and Palantir.
The fall follows a 50% surge in tech shares since April, far outpacing the broader market and pushing valuations to year-highs. Concerns are growing that investor enthusiasm around AI has become overheated, with some funds reducing their exposure ahead of expected interest rate guidance.
US market watchers are now focused on Federal Reserve Chair Jerome Powell’s speech at Jackson Hole, which could signal whether rate cuts are on the horizon. Tech stocks, already heavily weighted in many portfolios, are particularly vulnerable to higher rates due to their stretched price-to-earnings ratios.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Citi has expanded its digital client platform, CitiDirect Commercial Banking, with new AI capabilities to improve customer service and security.
The platform now supports over half of Citi’s global commercial banking client base and handles around 2.3 million sessions.
AI features assist in fraud detection, automate customer queries, and provide real-time onboarding updates and guidance.
KYC renewals have been simplified through automated alerts and pre-filled forms, cutting effort and processing time for clients.
Live in markets including the UK, US, India, and others, the platform has received positive feedback from over 10,000 users. Citi says the enhancements are part of a broader effort to make mid-sized corporate banking faster, more innovative, and more efficient.
Federal Reserve Vice Chair for Supervision Michelle Bowman has warned that banks must embrace blockchain technology or risk fading into irrelevance. At the Wyoming Blockchain Symposium on 19 August, she urged banks and regulators to drop caution and embrace innovation.
Bowman highlighted tokenisation as one of the most immediate applications, enabling assets to be transferred digitally without intermediaries or physical movement.
She explained that tokenised systems could cut operational delays, reduce risks, and expand access across large and smaller banks. Regulatory alignment, she added, could accelerate tokenisation from pilots to mainstream adoption.
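The settlement model Bowman describes can be illustrated with a toy example: on a shared ledger, transferring a tokenised asset is a single record update rather than a chain of intermediaries and physical delivery. The sketch below is purely illustrative (the class and token IDs are hypothetical, not any real system):

```python
# Toy illustration of tokenised settlement: a shared ledger maps token IDs
# to owners, so transferring an asset is one atomic ledger update rather
# than physical movement through intermediaries. Hypothetical code, not a
# real banking system.
class TokenLedger:
    def __init__(self) -> None:
        self.owners: dict[str, str] = {}

    def issue(self, token_id: str, owner: str) -> None:
        """Record a newly tokenised asset against its first owner."""
        self.owners[token_id] = owner

    def transfer(self, token_id: str, sender: str, receiver: str) -> None:
        """Settle a transfer: verify ownership, then update the record."""
        if self.owners.get(token_id) != sender:
            raise ValueError("sender does not own this token")
        self.owners[token_id] = receiver  # settlement is a single update
```

In this simplified picture, the operational delays and reconciliation risk Bowman mentions disappear because both parties read and write the same record.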
Fraud prevention was also a key point of her remarks. Bowman said financial institutions face growing threats from scams and identity theft, but argued blockchain could help reduce fraud.
She called for regulators to ensure frameworks support adoption rather than hinder it, framing the technology as a chance for collaboration between the industry and the Fed.
OpenAI’s rollout of GPT-5 has faced criticism from users attached to older models, who say the new version lacks the character of its predecessors.
GPT-5 was designed as an all-in-one model, featuring a lightweight version for rapid responses and a reasoning version for complex tasks. A routing system determines which option to use, although users can manually select from several alternatives.
Modes include Auto, Fast, Thinking, Thinking mini, and Pro, with the last available to Pro subscribers for $200 a month. Standard paid users can still access GPT-4o, GPT-4.1, GPT-4o mini, and even o3 through additional settings.
Chief executive Sam Altman has said the long-term goal is to give users more control over ChatGPT’s personality, making customisation a solution to concerns about style. He promised ample notice before permanently retiring older models.
A new MIT study has found that 95% of corporate AI projects fail to deliver returns, mainly due to difficulties integrating them with existing workflows.
The report, ‘The GenAI Divide: State of AI in Business 2025’, examined 300 deployments and interviewed 350 employees. Only 5% of projects generated value, typically when focused on solving a single, clearly defined problem.
Executives often blamed model performance, but researchers pointed to a workforce ‘learning gap’ as the bigger barrier. Many projects faltered because staff were unprepared to adapt processes effectively.
More than half of GenAI budgets were allocated to sales and marketing, yet the most substantial returns came from automating back-office tasks, such as reducing agency costs and streamlining roles.
The study also found that tools purchased from specialised vendors were nearly twice as successful as in-house systems, with success rates of 67% compared to 33%.
The Commonwealth Bank of Australia has reversed plans to cut 45 customer service roles following union pressure over the use of AI in its call centres.
The Finance Sector Union argued that CBA was not transparent about call volumes, taking the case to Australia’s Fair Work Commission. Staff reported rising workloads despite claims that the bank’s voice bot reduced calls by 2,000 weekly.
CBA admitted its redundancy assessment was flawed, stating it had not fully considered its business needs. Affected employees are being offered the option to remain in their current roles, relocate within the firm, or depart.
CBA apologised and pledged to review its internal processes. Chief executive Matt Comyn has promoted AI adoption, including a new partnership with OpenAI, but the union called the reversal a ‘massive win’ for workers.
Private conversations with xAI’s chatbot Grok have been exposed online, raising serious concerns over user privacy and AI safety. Forbes found that Grok’s ‘share’ button created public URLs, later indexed by Google and other search engines.
The leaked content is troubling, ranging from questions on hacking crypto wallets to instructions on drug production and even violent plots. Although xAI bans harmful use, some users still received dangerous responses, which are now publicly accessible online.
The exposure occurred because search engines automatically indexed the shareable links, a flaw echoing previous issues with other AI platforms, including OpenAI’s ChatGPT. Designed for convenience, the feature exposed sensitive chats, damaging trust in xAI’s privacy promises.
The incident pressures AI developers to integrate stronger privacy safeguards, such as blocking the indexing of shared content and enforcing privacy-by-design principles. Users may hesitate to use chatbots without fixes, fearing their data could reappear online.
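The indexing safeguard mentioned above is well-established web practice: a share endpoint can send the standard `X-Robots-Tag` response header and embed a `robots` meta tag so crawlers exclude the page. The sketch below shows the idea; the function names and page structure are hypothetical, though the header and meta tag are standard mechanisms recognised by major search engines.

```python
# Illustrative sketch of opting shared chat pages out of search indexing.
# The X-Robots-Tag header and robots meta tag are standard crawler
# directives; the handler functions themselves are hypothetical.
NOINDEX_META = '<meta name="robots" content="noindex, nofollow">'

def share_page_headers() -> dict[str, str]:
    """HTTP response headers for a shared-chat URL."""
    return {
        "X-Robots-Tag": "noindex, nofollow",   # tell crawlers not to index
        "Cache-Control": "private, no-store",  # keep copies out of caches
    }

def render_shared_chat(chat_html: str) -> str:
    """Embed the robots meta tag so the page is excluded even when the
    HTTP header is dropped by an intermediary."""
    return f"<html><head>{NOINDEX_META}</head><body>{chat_html}</body></html>"
```

Belt-and-braces delivery (header plus meta tag) matters here because shared pages are often cached or re-served by intermediaries that strip response headers.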
Microsoft AI chief Mustafa Suleyman has urged AI firms to stop suggesting their models are conscious, warning of growing risks from unhealthy human attachments to AI systems.
In a blog post, he described the phenomenon as Seemingly Conscious AI, where models mimic human responses convincingly enough to give users the illusion of feeling and thought. He cautioned that this could fuel advocacy for AI rights, welfare, or even citizenship.
Suleyman stressed that such beliefs could emerge even among people without prior mental health issues. He called on the industry to develop guardrails that prevent or counter perceptions of AI consciousness.
AI companions, a fast-growing product category, were highlighted as requiring urgent safeguards. The Microsoft AI chief’s comments follow recent controversies, including OpenAI’s decision to temporarily retire GPT-4o, which drew protests from users emotionally attached to the model.
Researchers at the University of California, Davis, have revealed that generative AI browser assistants may be harvesting sensitive data from users without their knowledge or consent.
The study, led by the UC Davis Data Privacy Lab, tested popular browser extensions powered by AI and discovered that many collect personal details ranging from search history and email contents to financial records.
The findings highlight a significant gap in transparency. While these tools often market themselves as productivity boosters or safe alternatives to traditional assistants, many lack clear disclosures about the data they extract.
Researchers sometimes observed personal information being transmitted to third-party servers without encryption.
Privacy advocates argue that the lack of accountability puts users at significant risk, particularly given the rising adoption of AI assistants for work, education and healthcare. They warn that sensitive data could be exploited for targeted advertising, profiling, or cybercrime.
The UC Davis team has called for stricter regulatory oversight, improved data governance, and mandatory safeguards to protect users from hidden surveillance.
They argue that stronger frameworks are needed to balance innovation with fundamental rights as generative AI tools continue to integrate into everyday digital infrastructure.