Telegram-powered TON on track for mass adoption

TON, the blockchain natively embedded in Telegram’s app, is emerging as the most practical path to mainstream crypto adoption. With over 900 million users on Telegram and more than 150 million TON accounts created, the platform is delivering Web3 features through a familiar, app-like experience.

Unlike Ethereum or Solana, which require external wallets and technical knowledge, TON integrates features like tipping, staking, and gaming directly into Telegram. Mini apps like Notcoin and Catizen let users access blockchain features without dealing with wallets or gas fees.

TON currently processes around 2 million daily transactions and may reach over 10 million daily users by 2027. Growing user fatigue with complex blockchain interfaces positions TON’s simple, mobile-first design to lead the next adoption wave.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cybersecurity sector sees busy July for mergers

July witnessed a significant surge in cybersecurity mergers and acquisitions (M&A), spearheaded by Palo Alto Networks’ announcement of its definitive agreement to acquire identity security firm CyberArk for an estimated $25 billion.

The transaction, set to be the second-largest cybersecurity acquisition on record, signals Palo Alto’s strategic entry into identity security.

Beyond this significant deal, Palo Alto Networks also completed its purchase of AI security specialist Protect AI. The month saw widespread activity across the sector, including LevelBlue’s acquisition of Trustwave to create the industry’s largest pure-play managed security services provider.

Zurich Insurance Group, Signicat, Limerston Capital, Darktrace, Orange Cyberdefense, SecurityBridge, Commvault, and Axonius all announced or finalised strategic cybersecurity acquisitions.

The deals highlight a strong market focus on AI security, identity management, and expanding service capabilities across various regions.

OpenAI pulls searchable chats from ChatGPT

OpenAI has removed a feature that allowed users to make their ChatGPT conversations publicly searchable, following backlash over accidental exposure of sensitive content.

Dane Stuckey, OpenAI’s CISO, confirmed the rollback on Thursday, describing it as a short-lived experiment meant to help users discover useful conversations. However, he acknowledged that the feature posed privacy risks.

‘Ultimately, we think this feature introduced too many opportunities for folks to accidentally share things they didn’t intend to,’ Stuckey wrote in a post on X. He added that OpenAI is working to remove any indexed content from search engines.

The move came swiftly after Fast Company and privacy advocate Luiza Jarovsky reported that some shared conversations were appearing in Google search results.

Jarovsky posted examples on X, noting that even though the chats were anonymised, users were unknowingly revealing personal experiences, including harassment and mental health struggles.

To activate the feature, users had to tick a box allowing their chat to be discoverable. While the process required active steps, critics warned that some users might opt in without fully understanding the consequences. Stuckey said the rollback will be complete by Friday morning.

The incident adds to growing concerns around AI and user privacy, particularly as conversational platforms like ChatGPT become more embedded in everyday life.

As Meta AI grows smarter on its own, critics warn of regulatory gaps

While OpenAI’s ChatGPT and Google’s Gemini dominate headlines, Meta’s AI is making quieter, but arguably more unsettling, progress. According to CEO Mark Zuckerberg, Meta’s AI is advancing rapidly and, crucially, learning to improve without external input.

In a blog post titled ‘Personal Superintelligence’, Zuckerberg claimed that Meta AI is becoming increasingly powerful through self-directed development. While he described current gains as modest, he emphasised that the trend is both real and significant.

Zuckerberg framed this as part of a broader mission to build AI that acts as a ‘personal superintelligence’, a tool that empowers individuals and becomes widely accessible. However, critics argue this narrative masks a deeper concern: AI systems that can evolve autonomously, outside human guidance or scrutiny.

The concept of self-improving AI is not new. Researchers have previously built systems capable of learning from other models or user interactions. What’s different now is the speed, scale and opacity of these developments, particularly within big tech companies operating with minimal public oversight.

The progress comes amid weak regulation. While governments, including the US under the Biden administration, have issued AI action plans, experts say these lack the strength to keep pace. Meanwhile, AI is rapidly spreading across everyday services, from healthcare and education to biometric verification.

Recent examples include Google’s behavioural age-estimation tools for teens, illustrating how AI is already making high-stakes decisions. As AI systems become more capable, questions arise: How much data will they access? Who controls them? And can the public meaningfully influence their design?

Zuckerberg struck an optimistic tone, framing Meta’s AI as democratic and empowering. However, that may obscure the risks of AI outpacing oversight, as some tech leaders warn of existential threats while others focus on commercial gains.

The lack of transparency worsens the problem. If Meta’s AI is already showing signs of self-improvement, are similar developments happening in other frontier models, such as GPT or Gemini? Without independent oversight, the public has no clear way to know, and even less ability to intervene.

Until enforceable global regulations are in place, society is left to trust that private firms will self-regulate, even as they compete in a high-stakes race for dominance. That’s a risky gamble when the technology itself is changing faster than we can respond.

As Meta AI evolves with little fanfare, the silence may be more ominous than reassuring. AI’s future may arrive before we are prepared to manage its consequences, and by then, it might be too late to shape it on our terms.

Delta’s personalised flight costs under scrutiny

Delta Air Lines’ recent revelation about using AI to price some airfares is drawing significant criticism. The airline aims to increase AI-influenced pricing to 20 per cent of its domestic flights by late 2025.

While Delta’s president, Glen Hauenstein, noted positive results from their Fetcherr-supplied AI tool, industry observers and senators are voicing concerns. Critics worry that AI-driven pricing, similar to rideshare surge models, could lead to increased fares for travellers and raise serious data privacy issues.

Senators including Ruben Gallego, Mark Warner, and Richard Blumenthal have highlighted fears that ‘surveillance pricing’ could use extensive personal data to estimate a passenger’s willingness to pay.

Despite Delta’s spokesperson denying individualised pricing based on personal information, AI experts suggest factors like device type and browsing behaviour are likely influencing prices, making them ‘deeply personalised’.
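To make the concern concrete, here is a deliberately simplified sketch of how observable signals could feed a price adjustment. Every signal, multiplier, and the `quote` function are hypothetical illustrations; this is not Fetcherr’s or any airline’s actual model.

```python
# Toy illustration of signal-based price adjustment.
# All signals and weights below are invented for demonstration only.
BASE_FARE = 300.0

# Hypothetical multipliers keyed on signals observable from a booking session.
SIGNAL_MULTIPLIERS = {
    ("device", "high_end_phone"): 1.05,
    ("device", "desktop"): 1.00,
    ("urgency", "last_minute"): 1.20,
    ("urgency", "flexible_dates"): 0.95,
}

def quote(signals: list[tuple[str, str]]) -> float:
    """Scale the base fare by one multiplier per observed signal."""
    price = BASE_FARE
    for signal in signals:
        price *= SIGNAL_MULTIPLIERS.get(signal, 1.0)
    return round(price, 2)

# A last-minute booker on an expensive phone pays more than the base fare:
print(quote([("device", "high_end_phone"), ("urgency", "last_minute")]))  # 378.0
```

Even this toy version shows why critics call such pricing opaque: two travellers see different fares for the same seat, and neither can tell which of their signals moved the price.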

Different travellers could be affected unevenly. Bargain hunters with flexible dates might benefit, but business travellers and last-minute bookers may face higher costs. Other airlines like Virgin Atlantic also use Fetcherr’s technology, indicating a wider industry trend.

Pricing experts like Philip Carls warn that passengers won’t know if they’re getting a fair deal, and proving discrimination, even if unintended by AI, could be almost impossible.

American Airlines’ CEO, Robert Isom, has publicly criticised Delta’s move, stating American won’t copy the practice, though past incidents show airlines can adjust fares based on booking data even without AI.

With dynamic pricing technology already permitted, experts anticipate lawmakers will soon scrutinise AI’s role more closely, potentially leading to new transparency mandates.

For now, travellers can try strategies like using incognito mode, clearing cookies, or employing a VPN to obscure their digital footprint and potentially avoid higher AI-driven fares.

Gulf states reframe AI as the ‘new oil’ in post-petroleum push

Gulf states are actively redefining national strategy by embracing AI as a cornerstone of post-oil modernisation. Saudi Arabia, through its AI platform Humain, a subsidiary of the Public Investment Fund, has committed state resources to building core infrastructure and developing Arabic multimodal models. Concurrently, the UAE is funding its $100 billion MGX initiative and supporting projects like G42 and the Falcon open-source model from Abu Dhabi’s Technology Innovation Institute.

Economic rationale underpins this ambition. Observers suggest that broad AI adoption across GCC sectors, including energy, healthcare, aviation, and government services, could add as much as $150 billion to regional GDP. Yet, concerns persist around workforce limitations, regulatory maturation, and geopolitical complications tied to supply chain dependencies.

Interest in AI has also reached geopolitical levels. Gulf leaders have struck partnerships with US firms to secure advanced AI chips and infrastructure, as seen during high-profile agreements with Nvidia, AMD, and Amazon. Critics caution that hosting major data centres in geopolitically volatile zones introduces physical and strategic risks, especially in contexts of rising regional tension.

EU AI Act oversight and fines begin this August

A new phase of the EU AI Act takes effect on 2 August, requiring member states to appoint oversight authorities and enforce penalties. While the legislation has been in force for a year, this marks the beginning of real scrutiny for AI providers across Europe.

Under the new provisions, countries must notify the European Commission of which market surveillance authorities will monitor compliance. But many are expected to miss the deadline. Experts warn that without well-resourced and competent regulators, the risks to rights and safety could grow.

The complexity is significant. Member states must align enforcement with other regulations, such as the GDPR and Digital Services Act, raising concerns regarding legal fragmentation and inconsistent application. Some fear a repeat of the patchy enforcement seen under data protection laws.

Companies that violate the EU AI Act could face fines of up to €35 million or 7% of global turnover. Smaller firms may face reduced penalties, but enforcement will vary by country.
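As a rough illustration of how that cap works (a sketch of the stated figures only, not legal guidance; actual penalty tiers depend on the category of violation), the ceiling for the most serious breaches is the higher of the fixed amount and the turnover share:

```python
def max_penalty_eur(global_turnover_eur: float,
                    fixed_cap_eur: float = 35_000_000,
                    turnover_share: float = 0.07) -> float:
    """Illustrative upper bound on an EU AI Act fine for the most
    serious violations: the greater of a fixed cap and a share of
    worldwide annual turnover."""
    return max(fixed_cap_eur, turnover_share * global_turnover_eur)

# For a firm with €2 billion global turnover, 7% of turnover
# exceeds the €35 million fixed cap, so the turnover figure applies.
print(max_penalty_eur(2_000_000_000))  # ≈ €140 million
```

For smaller firms the fixed €35 million cap dominates, which is one reason the Act also provides for reduced penalties for SMEs.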

Rules regarding general-purpose AI models such as ChatGPT, Gemini, and Grok also take effect. A voluntary Code of Practice introduced in July aims to guide compliance, but only some firms, such as Google and OpenAI, have agreed to sign. Meta has refused, arguing the rules stifle innovation.

Existing AI tools have until 2027 to comply fully, but any launched after 2 August must meet the new requirements immediately. With implementation now underway, the AI Act is shifting from legislation to enforcement.

China says the US used a Microsoft server vulnerability to launch cyberattacks

China has accused the US of exploiting long-known vulnerabilities in Microsoft Exchange servers to launch cyberattacks on its defence sector, escalating tensions in the ongoing digital arms race between the two superpowers.

In a statement released on Friday, the Cyber Security Association of China claimed that US hackers compromised servers belonging to a major Chinese military contractor, allegedly maintaining access for nearly a year.

The group did not disclose the name of the affected company.

The accusation is a sharp counterpunch to long-standing US claims that Beijing has orchestrated repeated cyber intrusions using the same Microsoft software. In 2021, Microsoft attributed a wide-scale hack affecting tens of thousands of Exchange servers to Chinese threat actors.

Two years later, another incident compromised the email accounts of senior US officials, prompting a federal review that criticised Microsoft for what it called a ‘cascade of security failures.’

Microsoft, based in Redmond, Washington, has recently disclosed additional intrusions by China-backed groups, including attacks exploiting flaws in its SharePoint platform.

Jon Clay of Trend Micro commented on the tit-for-tat cyber blame game: ‘Every nation carries out offensive cybersecurity operations. Given the latest SharePoint disclosure, this may be China’s way of retaliating publicly.’

Cybersecurity researchers note that Beijing has recently increased its use of public attribution as a geopolitical tactic. Ben Read of Wiz.io pointed out that China now uses cyber accusations to pressure Taiwan and shape global narratives around cybersecurity.

In April, China accused US National Security Agency (NSA) employees of hacking into the Asian Winter Games in Harbin, targeting personal data of athletes and organisers.

While the US frequently names alleged Chinese hackers and pursues legal action against them, China has historically avoided levelling public allegations against American intelligence agencies, until now.

China’s Silk Typhoon hackers filed patents for advanced spyware tools

A Chinese state-backed hacking group known as Silk Typhoon has filed more than ten patents for intrusive cyberespionage tools, shedding light on its operations’ vast scope and sophistication.

These patents, registered by firms linked to China’s Ministry of State Security, detail covert data collection software far exceeding the group’s previously known attack methods.

The revelations surfaced following a July 2025 US Department of Justice indictment against two alleged members of Silk Typhoon, Xu Zewei and Zhang Yu.

Both are associated with companies tied to the Shanghai State Security Bureau and connected to the Hafnium group, which Microsoft renamed Silk Typhoon in 2023.

Instead of targeting only Windows environments, the patent filings reveal a sweeping set of surveillance tools designed for Apple devices, routers, mobile phones, and even smart home appliances.

Submissions include software for bypassing FileVault encryption, extracting remote cellphone data, decrypting hard drives, and analysing smart devices. Analysts from SentinelLabs suggest these filings offer an unprecedented glimpse into the architecture of China’s cyberwarfare ecosystem.

Silk Typhoon gained global attention in 2021 with its Microsoft Exchange ProxyLogon campaign, which prompted a rare coordinated condemnation by the US, UK, and EU. The newly revealed capabilities show the group’s operations are far more advanced and diversified than previously believed.

OpenAI and Nscale to build an AI super hub in Norway

OpenAI has revealed its first European data centre project in partnership with British startup Nscale, selecting Norway as the location for what is being called ‘Stargate Norway’.

The initiative mirrors the company’s ambitious $500 billion US ‘Stargate’ infrastructure plan and reflects Europe’s growing demand for large-scale AI computing capacity.

Nscale will lead the development of a $1 billion AI gigafactory in Norway, with engineering firm Aker matching the investment. These advanced data centres are designed to meet the heavy processing requirements of cutting-edge AI models.

OpenAI expects the facility to deliver 230MW of computing power by the end of 2026, making it a significant strategic foothold for the company on the continent.

Sam Altman, CEO of OpenAI, stated that Europe needs significantly more computing to unlock AI’s full potential for researchers, startups, and developers. He said Stargate Norway will serve as a cornerstone for driving innovation and economic growth in the region.

Nscale confirmed that Norway’s AI ecosystem will receive priority access to the facility, while remaining capacity will be offered to users across the UK, Nordics and Northern Europe.

The data centre will support 100,000 of NVIDIA’s most advanced GPUs, with long-term plans to scale as demand grows.
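A back-of-envelope check on the announced figures (an illustrative calculation only; the real per-GPU draw depends on the hardware generation and on cooling and networking overhead) suggests the facility budgets roughly 2.3 kW per accelerator:

```python
# Illustrative arithmetic on the publicly stated Stargate Norway figures.
total_power_mw = 230     # stated computing power by end of 2026
gpu_count = 100_000      # stated number of NVIDIA GPUs supported

# Facility-level average, including cooling and infrastructure overhead.
kw_per_gpu = total_power_mw * 1_000 / gpu_count
print(kw_per_gpu)  # 2.3
```

That order of magnitude is consistent with current top-end data-centre accelerators once facility overhead is included, which is why such sites increasingly resemble power projects as much as IT projects.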

The move follows broader European efforts to strengthen AI infrastructure, with the UK and France pushing for major regulatory and funding reforms.
