The US Securities and Exchange Commission (SEC) and the Commodity Futures Trading Commission (CFTC) have announced a joint effort to clarify the rules for spot cryptocurrency trading. Regulators confirmed that US and foreign exchanges can list spot crypto products, including leveraged and margin products.
The guidance follows recommendations from the President’s Working Group on Digital Asset Markets, which called for rules that keep blockchain innovation within the country.
Regulators said they are ready to review filings, address custody and clearing, and ensure spot markets meet transparency and investor protection standards.
Under the new approach, major venues such as the New York Stock Exchange, Nasdaq, CME Group and Cboe Global Markets could seek to list spot crypto assets. Foreign boards of trade recognised by the CFTC may also be eligible.
The move highlights a policy shift under President Donald Trump’s administration, with Congress and the White House pressing for greater regulatory clarity.
In July, the House of Representatives passed the CLARITY Act, a bill on crypto market structure that is now before the Senate. Together with the regulators’ statement, it marks a key step in aligning US digital asset markets with established financial rules.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Kazakhstan has announced its support for China’s proposal to establish a Global Organisation for Cooperation in AI, highlighting its ambition to strengthen digital ties with Beijing.
President Kassym-Jomart Tokayev voiced his backing during the Kazakh-Chinese Business Council meeting in Beijing, following his participation in the Shanghai Cooperation Organisation summit in Tianjin.
Tokayev stressed that joint efforts in AI were vital as experts predict the global market could reach $5 trillion by 2033, accounting for nearly one-third of the technology sector. He praised China’s digital achievements and urged bilateral collaboration in emerging technologies.
Kazakhstan has taken notable steps to position itself as a regional digital hub, launching Central Asia’s first supercomputer and the AlemAI International Centre for AI earlier this year.
Tokayev added that partnerships with Chinese firms, including a major construction agreement, would accelerate the development of Alatau City as a separate innovation ecosystem.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
IBM has announced plans to develop next-generation computing architectures by integrating quantum computers with high-performance computing, a concept it calls quantum-centric supercomputing.
The company is working with AMD to build scalable, open-source platforms that combine IBM’s quantum expertise with AMD’s strength in HPC and AI accelerators. The aim is to move beyond the limits of traditional computing and explore solutions to problems that classical systems cannot address alone.
Quantum computing uses qubits governed by quantum mechanics, offering a far richer computational space than binary bits. In a hybrid model, quantum machines could simulate atoms and molecules, while supercomputers powered by CPUs, GPUs, and AI manage large-scale data analysis.
Arvind Krishna, IBM’s CEO, said the approach represents a new way of simulating the natural world. AMD’s Lisa Su described high-performance computing as foundational to tackling global challenges, noting the partnership could accelerate discovery and innovation.
An initial demonstration is planned for later this year, showing IBM quantum computers working with AMD technologies. Both companies say open-source ecosystems like Qiskit will be crucial to building new algorithms and advancing fault-tolerant quantum systems.
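The division of labour can be pictured with a toy example. The sketch below, which assumes a local Qiskit installation, only illustrates the split between a small quantum circuit and classical post-processing; it is not IBM’s or AMD’s actual quantum-centric stack.

```python
# Minimal hybrid sketch: a quantum circuit prepares an entangled state,
# and ordinary classical Python analyses the result. Illustrative only.
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

# Quantum part: two qubits in a Bell state (H on qubit 0, then CNOT).
qc = QuantumCircuit(2)
qc.h(0)
qc.cx(0, 1)

# Classical part: simulate the state and inspect the outcome distribution.
probs = Statevector.from_instruction(qc).probabilities_dict()
most_likely = max(probs, key=probs.get)

print(probs)        # ≈ {'00': 0.5, '11': 0.5}: only correlated outcomes appear
print(most_likely)  # classical code decides what to do with the quantum result
```

In a full quantum-centric supercomputer, the circuit would run on quantum hardware while CPUs, GPUs and AI accelerators handle the surrounding simulation and data analysis at scale.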
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Elon Musk’s AI chatbot, Grok, has faced repeated changes to its political orientation, with updates shifting its answers towards more conservative views.
xAI, Musk’s company, initially promoted Grok as neutral and truth-seeking, but internal prompt changes have steered its answers on contentious topics. Adjustments included portraying declining fertility as the greatest threat to civilisation and downplaying right-wing violence.
Analyses of Grok’s responses by The New York Times showed that the July updates shifted answers to the right on government and economy, while some social responses remained left-leaning. Subsequent tweaks pulled it back closer to neutrality.
Critics say that system prompts, such as the short instruction ‘be politically incorrect’, make it easy to adjust outputs but also leave the model prone to erratic or offensive responses. A July update saw Grok briefly endorse a controversial historical figure before xAI rolled the change back.
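For readers unfamiliar with the mechanism, the sketch below shows how a one-line system prompt is typically prepended to a chat request in the widely used OpenAI-style format. The endpoint, model name and prompt text are placeholders for illustration; Grok’s real internal prompts and serving setup are not public.

```python
# Illustrative only: how a short system prompt can steer a chat model's framing.
# The base_url, model name, and prompt text below are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example.com/v1",  # placeholder OpenAI-compatible endpoint
    api_key="YOUR_KEY",
)

SYSTEM_PROMPT = "Be politically incorrect."  # the kind of one-line steer critics describe

response = client.chat.completions.create(
    model="example-model",  # placeholder model name
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},  # hidden instruction set by the operator
        {"role": "user", "content": "What are the main drivers of political violence?"},
    ],
)
print(response.choices[0].message.content)
```

Because the system message sits above every user turn, changing a single line of it can shift the model’s framing across all conversations, which is why such edits are both easy to make and easy to get wrong.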
The case highlights growing concerns about political bias in AI systems. Researchers argue that all chatbots reflect the worldviews of their training data, while companies increasingly face pressure to align them with user expectations or political demands.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Exeter technology firm Rapid Fusion has introduced an AI-powered print assistant to enhance its robotic additive manufacturing systems. Known as Bob, the system has been in development for eight months and is now being rolled out to clients.
The AI aims to simplify machine operation, provide greater control and reduce downtime through predictive maintenance.
It is compatible with the company’s Apollo, Zeus and Medusa models, including the UK’s first large-format hybrid 3D gantry printer.
Rapid Fusion’s chief technology officer, Martin Jewell, said the system represents a breakthrough in making complex 3D printing more accessible. A standard version will be released in early 2026, while select partners and universities will act as super users to refine future updates.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Google has dismissed reports that Gmail suffered a massive breach, saying rumours that it had warned 2.5 billion users were false.
In a Monday blog post, Google rejected claims that it had issued global notifications about a serious Gmail security issue. It stressed that its protections remain effective against phishing and malware.
Confusion stems from a June incident involving a Salesforce server, during which attackers briefly accessed public business information, including names and contact details. Google said all affected parties were notified by early August.
The company acknowledged that phishing attempts are increasing, but clarified that Gmail’s defences block more than 99.9% of such attempts. A July blog post on phishing risks may have been misinterpreted as evidence of a breach.
Google urged users to remain vigilant, recommending password alternatives such as passkeys and regular account reviews. While the false alarm spurred unnecessary panic, security experts noted that updating credentials remains good practice.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Cybersecurity experts have warned that AI is being used to target senior citizens in sophisticated financial scams. In the Phantom Hacker scam, criminals impersonate tech support, bank, and government workers to steal seniors’ life savings.
The first stage involves a fake tech support worker accessing the victim’s computer to check accounts under the pretence of spotting fraud. An impersonator from the bank’s fraud department then warns victims that their funds are at risk from foreign hackers and tells them to move the money to a ‘safe’ account.
A fake government worker then directs the victim to transfer money to an alias account controlled by the scammers. Check Point CIO Pete Nicoletti says AI helps scammers identify targets by analysing social media and online activity.
Experts stress that reporting the theft immediately is crucial. Delays significantly reduce the chance of recovering stolen funds, leaving many victims permanently defrauded.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Private and public businesses are acquiring Bitcoin nearly four times faster than miners are producing new coins, according to financial services firm River.
Companies, including publicly traded treasury firms and private businesses, purchased an average of 1,755 BTC daily in 2025, with ETFs adding 1,430 BTC daily. Governments also joined in, buying about 39 BTC per day.
In contrast, miners produced just 450 BTC daily, raising fears of a potential supply crunch.
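A quick back-of-the-envelope check with River’s figures shows where the ‘nearly four times’ ratio and the supply-gap worry come from.

```python
# Rough arithmetic using the daily figures reported by River for 2025.
corporate_daily = 1_755   # BTC bought per day by companies
etf_daily = 1_430         # BTC bought per day by ETFs
government_daily = 39     # BTC bought per day by governments
mined_daily = 450         # new BTC issued per day by miners

print(corporate_daily / mined_daily)                           # ≈ 3.9x, 'nearly four times'
total_demand = corporate_daily + etf_daily + government_daily  # 3,224 BTC per day
print(total_demand - mined_daily)                              # ≈ 2,774 BTC daily shortfall met from existing holders
```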
In the second quarter of 2025 alone, treasury companies acquired 159,107 BTC, bringing business holdings to around 1.3 million BTC. Michael Saylor’s firm Strategy leads with a corporate reserve of 632,457 BTC, making it the largest known holder.
Strategy says aggressive buying does not affect short-term prices, as most transactions are handled off-exchange through OTC markets. Analysts, however, continue to speculate that dwindling exchange reserves could become a powerful bullish force.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
A top European regulator has warned that tokenised stocks could mislead investors and undermine confidence in financial markets. Natasha Cazenave of ESMA said many tokenised stocks lack shareholder rights such as voting and dividends.
Unlike traditional equities, tokenised stocks are typically issued through intermediaries and merely track share prices. Cazenave cautioned that retail investors may wrongly believe they own company shares when they do not, exposing them to products they misunderstand.
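The difference can be made concrete with a toy data model; the fields below are invented purely to illustrate the distinction and are not based on any specific platform.

```python
# Illustrative only: what a tokenised-stock position typically conveys
# compared with direct share ownership. Field names are invented.
from dataclasses import dataclass

@dataclass
class Share:
    ticker: str
    voting_rights: bool = True         # a shareholder can vote
    dividend_entitlement: bool = True  # and receives dividends

@dataclass
class TokenisedStock:
    ticker: str
    issuer: str                         # an intermediary, not the listed company
    tracks_price_only: bool = True      # economic exposure to the share price
    voting_rights: bool = False         # no say at shareholder meetings
    dividend_entitlement: bool = False  # no direct claim on dividends

position = TokenisedStock(ticker="XYZ", issuer="SomePlatform")
print(position.voting_rights, position.dividend_entitlement)  # False False
```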
Her warning follows the expansion of tokenised stock services on platforms like Robinhood and Kraken.
The World Federation of Exchanges recently echoed these concerns, urging regulators to strengthen oversight. The group warned that, without intervention, tokenised products could threaten market integrity and heighten investor risks.
Although advocates say tokenisation could cut costs and widen access, Cazenave noted most projects remain small, illiquid, and far from delivering promised efficiency. Regulators, she added, remain focused on balancing innovation with investor protection.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
OpenAI has confirmed that ChatGPT conversations signalling a risk of serious harm to others can be reviewed by human moderators and may even reach the police.
The company explained these measures in a blog post, stressing that its system is designed to balance user privacy with public safety.
The safeguards treat self-harm differently from threats to others. When a user expresses suicidal intent, ChatGPT directs them to professional resources instead of contacting law enforcement.
By contrast, conversations showing intent to harm someone else are escalated to trained moderators, and if they identify an imminent risk, OpenAI may alert authorities and suspend accounts.
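Schematically, the two-path safeguard can be pictured as a simple triage, sketched below. The risk labels, keyword check and actions are placeholders for illustration, not OpenAI’s actual pipeline.

```python
# Schematic sketch of the escalation flow described above. Illustrative only:
# the classifier and actions are placeholders, not OpenAI's real system.
from enum import Enum, auto

class Risk(Enum):
    NONE = auto()
    SELF_HARM = auto()
    HARM_TO_OTHERS = auto()

def classify(conversation: str) -> Risk:
    """Placeholder: a real system would use trained safety classifiers."""
    text = conversation.lower()
    if "end my life" in text:
        return Risk.SELF_HARM
    if "going to hurt them" in text:
        return Risk.HARM_TO_OTHERS
    return Risk.NONE

def handle(conversation: str) -> str:
    risk = classify(conversation)
    if risk is Risk.SELF_HARM:
        # Self-harm: surface professional resources, no referral to police.
        return "show_crisis_resources"
    if risk is Risk.HARM_TO_OTHERS:
        # Threats to others: route to human moderators, who may alert
        # authorities and suspend the account if the risk looks imminent.
        return "escalate_to_human_review"
    return "continue_normally"

print(handle("I am going to hurt them tomorrow"))  # escalate_to_human_review
```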
The company admitted its safety measures work better in short conversations than in lengthy or repeated ones, where safeguards can weaken.
OpenAI is working to strengthen consistency across interactions and is developing parental controls, new interventions for risky behaviour, and potential connections to professional help before crises worsen.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!