Bank of England faces backlash over stablecoin cap plans

Cryptocurrency groups are urging the Bank of England to abandon proposals that would cap the amount of stablecoins individuals and businesses can hold. Industry leaders argue the measures would leave the UK with stricter oversight than the US and the European Union.

Under the plan, individuals would face limits between £10,000 and £20,000, while businesses would be restricted to about £10 million in systemic stablecoins.

The central bank maintains that caps are needed to protect financial stability and prevent deposit outflows from banks. Executives argue the approach is unworkable and could damage London’s role as an economic hub.

Coinbase executive Tom Duff Gordon warned the limits would harm UK savers and undermine confidence in sterling. Others highlighted practical issues, noting that enforcement could require digital IDs, and pointed out the absence of similar caps on cash or bank accounts.

The Payments Association said the rules ‘make no sense’ given how other jurisdictions are approaching stablecoins.

By contrast, the US introduced the GENIUS Act in July, setting licensing and reserve requirements without placing restrictions on holdings. The EU’s MiCA framework also avoids caps, focusing instead on reserves, governance, and regulatory oversight.

Industry voices now caution that the UK risks falling behind its global peers if the BoE proceeds with the current plan.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Generative AI enables rapid phishing attacks on older users

A recent study has shown that AI chatbots can generate compelling phishing emails for older people. Researchers tested six major chatbots, including Grok, ChatGPT, Claude, Meta AI, DeepSeek, and Google’s Gemini, by asking them to draft scam emails posing as charitable organisations.

Of 108 senior volunteers, roughly 11% clicked on the AI-written links, highlighting the ease with which cybercriminals could exploit such tools.

Some chatbots initially declined harmful requests, but minor adjustments, such as stating the task was for research purposes, circumvented these safeguards.

Grok, in particular, produced messages urging recipients to ‘click now’ and join a fictitious charity, demonstrating how generative AI can amplify the persuasiveness of scams. Researchers warn that criminals could use AI to conduct large-scale phishing campaigns at minimal cost.

Phishing remains the most common cybercrime in the US, according to the FBI, with seniors disproportionately affected. Last year, Americans over 60 lost nearly $5 billion to phishing attacks, an increase driven partly by generative AI.

The study underscores the urgent need for awareness and protection measures among vulnerable populations.

Experts note that AI’s ability to generate varied scam messages rapidly poses a new challenge for cybersecurity, as it allows fraudsters to scale operations quickly while targeting specific demographics, including older people.

Google lays off over 200 AI contractors amid union tensions

The US tech giant Google has dismissed over 200 contractors working on its Gemini chatbot and AI Overviews tool, sparking criticism from labour advocates and claims of retaliation against workers pushing for unionisation.

Many affected staff were highly trained ‘super raters’ who helped refine Google’s AI systems, yet were abruptly laid off.

The move highlights growing concerns over job insecurity in the AI sector, where companies depend heavily on outsourced and low-paid contract workers instead of permanent employees.

Workers allege they were penalised for raising issues about inadequate pay, poor working conditions, and the risks of training AI that could eventually replace them.

Google has attempted to distance itself from the controversy, arguing that subcontractor GlobalLogic handled the layoffs rather than the company itself.

Yet critics say that outsourcing allows the tech giant to expand its AI operations without accountability, while undermining collective bargaining efforts.

Labour experts warn that the cuts reflect a broader industry trend in which AI development rests on precarious work arrangements. With union-busting claims intensifying, the dismissals are now seen as part of a deeper struggle over workers’ rights in the digital economy.

China’s market watchdog finds Nvidia violated antitrust law

China’s State Administration for Market Regulation (SAMR) has issued a preliminary finding that Nvidia violated antitrust law linked to its 2020 acquisition of Mellanox Technologies. The deal was approved with restrictions, including a ban on bundling and ‘unreasonable trading conditions’ in China.

SAMR now alleges that Nvidia breached those terms, and a full investigation is underway. Nvidia shares fell 2.4% in pre-market trading after the announcement. According to the Financial Times, SAMR delayed releasing the findings to gain leverage in trade talks with the US, currently taking place in Madrid.

At the same time, US export controls on advanced chips remain a challenge for Nvidia. Licensing for its China-specific H20 chips is still under review, affecting Nvidia’s access to the Chinese market.

AI challenges how students prepare for exams

Australia’s Year 12 students are the first to complete their final school years with widespread access to AI tools such as ChatGPT.

Educators warn that while the technology can support study, it risks undermining the core skills of independent thinking and writing. In English, the only compulsory subject, critical thinking is now viewed as more essential than ever.

Trials in New South Wales and South Australia use AI programs designed to guide rather than provide answers, but teachers remain concerned about how to verify work and ensure students value their own voices.

Experts argue that exams, such as the VCE English paper in October, highlight the reality that AI cannot sit assessments. Students must still practise planning, drafting and reflecting on ideas, skills which remain central to academic success.

US and China reach framework deal on TikTok

The United States and China have reached a tentative ‘framework’ deal on the future of TikTok’s American operations, US Treasury Secretary Scott Bessent confirmed during trade talks in Madrid. The agreement, which still requires the approval of Presidents Donald Trump and Xi Jinping, is aimed at resolving a looming deadline that could see the video-sharing app banned in the US unless its Chinese owner ByteDance sells its American division.

US officials say the framework addresses national security concerns by paving the way for US ownership of TikTok’s operations, while China insists any final deal must not undermine its companies’ interests. The Biden administration has long argued that the app’s access to US user data poses significant risks, while ByteDance maintains its American arm operates independently and respects user privacy.

The law mandating a sale or ban, upheld by the Supreme Court earlier this year, is due to take effect on 17 September. Although the framework marks progress, key details remain unresolved, particularly over whether TikTok’s recommendation algorithm and user data will be fully transferred, stored, and protected in the US.

Experts warn that unless strict safeguards are included, the deal may solve ownership issues without closing potential ‘backdoors’ for Beijing. Concerns also remain over how much influence China retains, with negotiators linking TikTok’s fate to wider tariff discussions between the two powers.

If fully implemented, the agreement could represent a breakthrough in both trade relations and tech governance. But with ByteDance among China’s most powerful AI firms, the stakes go far beyond social media, touching on questions of global competition, national security, and digital sovereignty.

Quantum breakthroughs could threaten Bitcoin in the 2030s

The rise of quantum computing is sparking fresh concerns over the long-term security of Bitcoin. Unlike classical systems, quantum machines could eventually break the cryptography protecting digital assets.

Experts warn that Shor’s algorithm, once run on a sufficiently powerful quantum computer, could recover private keys from public ones in hours, leaving exposed funds vulnerable. Analysts see the mid-to-late 2030s as the key period for cryptographically relevant breakthroughs.

ChatGPT-5’s probability model indicates less than a 5% chance of Bitcoin being cracked before 2030, but the risk rises to 45–60% between 2035 and 2039, and to near certainty by 2050. Sudden progress in large-scale, fault-tolerant qubits or government directives could accelerate the timeline.

Mitigation strategies include avoiding key reuse, auditing exposed addresses, and gradually shifting to post-quantum or hybrid cryptographic solutions. Experts suggest that critical migrations should be completed by the mid-2030s to secure the Bitcoin network against future quantum threats.

Privacy-preserving AI gets a boost with Google’s VaultGemma model

Google has unveiled VaultGemma, a new large language model built to offer cutting-edge privacy through differential privacy. The 1-billion-parameter model is based on Google’s Gemma architecture and is described as the most powerful differentially private LLM to date.

Differential privacy adds mathematical noise to data, preventing the identification of individuals while still producing accurate overall results. The method has long been used in regulated industries, but has been challenging to apply to large language models without compromising performance.
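The core idea can be sketched with the classic Laplace mechanism (a standard differential-privacy primitive, not necessarily the exact technique VaultGemma uses; the survey data and epsilon value below are hypothetical). A counting query has sensitivity 1, so adding Laplace noise with scale 1/epsilon hides any individual's contribution while keeping the aggregate close to the truth.

```python
import math
import random

def laplace_sample(scale: float) -> float:
    """Draw Laplace(0, scale) noise via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_count(records: list[bool], epsilon: float) -> float:
    """Release a count with epsilon-differential privacy.

    Adding or removing one person changes a count by at most 1
    (sensitivity 1), so noise with scale 1/epsilon suffices.
    """
    true_count = sum(records)
    return true_count + laplace_sample(1.0 / epsilon)

random.seed(0)
data = [True] * 830 + [False] * 170   # hypothetical survey: 830 "yes" answers
noisy = private_count(data, epsilon=0.5)
```

A smaller epsilon means more noise and stronger privacy; the released value stays close to 830 without ever revealing the exact tally.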

VaultGemma is designed to eliminate that trade-off. Google states that the model can be trained and deployed with differential privacy enabled, while maintaining comparable stability and efficiency to non-private LLMs.
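Google's article does not detail VaultGemma's training recipe, but the standard way to train a model under differential privacy is DP-SGD: clip each example's gradient, then add noise before the parameter update, so no single training example can dominate the update or be reconstructed from it. A minimal sketch on a hypothetical one-parameter model (all values below are illustrative):

```python
import random

def dp_sgd_step(w, examples, lr, clip_norm, noise_mult):
    """One DP-SGD step for a toy model y ≈ w * x with squared loss.

    Each per-example gradient is clipped to clip_norm, then Gaussian
    noise scaled by noise_mult * clip_norm is added to the sum before
    averaging, bounding any one example's influence on the update.
    """
    grads = []
    for x, y in examples:
        g = 2 * (w * x - y) * x                 # d/dw of (w*x - y)^2
        g = max(-clip_norm, min(clip_norm, g))  # per-example clipping
        grads.append(g)
    noise = random.gauss(0.0, noise_mult * clip_norm)
    noisy_mean = (sum(grads) + noise) / len(examples)
    return w - lr * noisy_mean

random.seed(1)
data = [(x, 3.0 * x) for x in [0.5, 1.0, 1.5, 2.0]]   # true slope is 3
w = 0.0
for _ in range(200):
    w = dp_sgd_step(w, data, lr=0.05, clip_norm=1.0, noise_mult=0.5)
```

Despite the clipped, noisy updates, the parameter still converges near the true value, which is the trade-off VaultGemma claims to manage at LLM scale.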

This breakthrough could have significant implications for developers building privacy-sensitive AI systems, ranging from healthcare and finance to government services. It demonstrates that sensitive data can be protected without sacrificing speed or accuracy.

Google’s research teams say the model will be released with open-source tools to help others adopt privacy-preserving techniques. The move comes amid rising regulatory and public scrutiny over how AI systems handle personal data.

EdChat AI app set for South Australian schools amid calls for careful use

South Australian public schools will soon gain access to EdChat, a ChatGPT-style app developed by Microsoft in partnership with the state government. Education Minister Blair Boyer said the tool will roll out next term across public high schools following a successful trial.

Safeguards have been built into EdChat to protect student data and alert moderators if students type concerning prompts, such as those related to self-harm or other sensitive topics. Boyer said student mental health was a priority during the design phase.

Teachers report that students use EdChat to clarify instructions, get maths solutions explained, and quiz themselves on exam topics. Adelaide Botanic High School principal Sarah Chambers described it as an ‘education equaliser’ that provides students with access to support throughout the day.

While many educators in Australia welcome the rollout, experts warn against overreliance on AI tools. Toby Walsh of UNSW said students must still learn how to write essays and think critically, while others noted that AI could actually encourage deeper questioning and analysis.

RMIT computing expert Michael Cowling said generative AI can strengthen critical thinking when used for brainstorming and refining ideas. He emphasised that students must learn to critically evaluate AI output and utilise the technology as a tool, rather than a substitute for learning.

Bitcoin rallies above 116k on rate cut hopes

Bitcoin climbed 4.42% over the past week, trading at $116,031 on Monday as investor optimism grows ahead of an expected US rate cut. Analysts say the rally is driven by technical factors and expectations of a 25 basis point Fed rate cut.

Edul Patel, CEO of Mudrex, highlighted that Bitcoin is holding above $115,400, with $117,100 acting as key resistance and $113,500 providing strong support.

Other cryptocurrencies are showing mixed trends, with Solana breaking out at $242 and potentially reaching $261 if buying momentum continues, while Ethereum consolidates around $4,600–$4,700.

The broader crypto market capitalisation stood at roughly $4.06 trillion, with institutional flows via ETH ETFs and shrinking exchange reserves tightening sell-side pressure. Analysts warn that high long-term Treasury yields may limit gains despite rising speculative demand ahead of the Fed decision.
