Trump unveils gold smartphone and new 5G wireless service

US President Donald Trump and his sons have launched a mobile phone service called Trump Mobile 5G, alongside plans to release a gold-coloured smartphone branded with the Trump name.

The service is being offered through partnerships with all three major US mobile networks, though they are not named directly.

The monthly plan, known as the ‘47 Plan’, costs $47.45, referencing Trump’s position as the 45th and 47th president. Customers can keep their current Android or iPhone devices, using either a physical SIM or an eSIM.

A new Trump-branded Android device, the T1, will launch in September. Priced at $499, it comes with Android 15, a 6.8-inch screen and biometric features like fingerprint scanning and AI facial recognition.

At a press event in New York, Donald Trump Jr. and Eric Trump introduced the initiative, saying it would combine high-quality service with an ‘America First’ approach.

They emphasised that the company is US-based, including its round-the-clock customer service, which promises real human support instead of automated systems.

While some critics may see the move as political branding, the Trump Organisation framed it as a business venture.

The company has already earned hundreds of millions from Trump-branded consumer goods. As with other mobile providers, the new service will fall under the regulatory oversight of the Federal Communications Commission, led by a Trump-appointed chair.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT and generative AI have polluted the internet — and may have broken themselves

The explosion of generative AI tools like ChatGPT has flooded the internet with low-quality, AI-generated content, making it harder for future models to learn from authentic human knowledge.

As AI continues to train on increasingly polluted data, a loop forms in which AI imitates already machine-made content, leading to a steady drop in originality and usefulness. The worrying trend is referred to as ‘model collapse’.
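The feedback loop can be made concrete with a toy simulation (illustrative only, not from the article): a ‘model’ that simply fits a Gaussian to its training data, where each new generation trains solely on samples drawn from the previous generation’s model. The diversity of the data steadily erodes.

```python
import random
import statistics

def fit(samples):
    # "Train" a toy model: estimate the mean and spread of the data.
    return statistics.mean(samples), statistics.stdev(samples)

def generate(mean, std, n, rng):
    # "Generate" synthetic content by sampling from the fitted model.
    return [rng.gauss(mean, std) for _ in range(n)]

rng = random.Random(0)
data = [rng.gauss(0.0, 1.0) for _ in range(20)]  # generation 0: human-made data
spreads = []
for _ in range(1000):          # each generation trains only on the previous
    mean, std = fit(data)      # generation's machine-made output
    spreads.append(std)
    data = generate(mean, std, 20, rng)

print(f"data diversity, first generation: {spreads[0]:.4f}")
print(f"data diversity, last generation:  {spreads[-1]:.4g}")
```

Each refit locks in only whatever variety survived the previous round of sampling, and lost diversity is never recovered, which is the mechanism behind the steady drop in originality described above.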

To illustrate the risk, researchers compare clean pre-AI data to ‘low-background steel’, a rare kind of steel produced before the first nuclear tests in 1945, which remains vital for specific medical and scientific uses.

Just as modern steel became contaminated by radiation, modern data is being tainted by artificial content. Cambridge researcher Maurice Chiodo notes that pre-2022 data is now seen as ‘safe, fine, clean’, while everything after is considered ‘dirty’.

A key concern is that techniques like retrieval-augmented generation, which allow AI to pull real-time data from the internet, risk spreading even more flawed content. Some research already shows that it leads to more ‘unsafe’ outputs.
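For readers unfamiliar with the technique, retrieval-augmented generation can be sketched in a few lines: score stored documents against a query, then splice the best matches into the model’s prompt. Everything below is an illustrative toy; real systems rank by embedding similarity rather than keyword overlap, and it is the retrieved content itself, not the mechanism, that carries the pollution risk.

```python
def retrieve(query, documents, k=2):
    """Rank documents by naive word overlap with the query (toy scoring;
    production systems use vector-embedding similarity instead)."""
    q_words = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query, documents):
    """Assemble the augmented prompt the language model would receive."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return f"Answer using this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Model collapse describes AI models degrading when trained on AI output.",
    "Low-background steel was produced before 1945 nuclear testing.",
    "Bananas are botanically berries.",
]
prompt = build_prompt("why does training on AI output cause model collapse?", docs)
print(prompt)
```

Whatever the retriever pulls in, flawed or not, is handed to the model as trusted context, which is why polluted sources propagate so easily into outputs.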

If developers rely on such polluted data, scaling models by adding more information becomes far less effective, potentially hitting a wall in progress.

Chiodo argues that future AI development could be severely limited without a clean data reserve. He and his colleagues urge the introduction of clear labelling and tighter controls on AI content.

However, industry resistance to regulation might make meaningful reform difficult, raising doubts about whether the pollution can be reversed.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Armenia plans major AI hub with NVIDIA and Firebird

Armenia has unveiled plans to develop a $500mn AI supercomputing hub in partnership with US tech leader NVIDIA, AI cloud firm Firebird, and local telecoms group Team.

Announced at the Viva Technology conference in Paris, the initiative marks the largest tech investment ever seen in the South Caucasus.

Due to open in 2026, the facility will house thousands of NVIDIA’s Blackwell GPUs and offer more than 100 megawatts of scalable computing power. Designed to advance AI research, training and entrepreneurship, the hub aims to position Armenia as a leading player in global AI development.

Prime Minister Nikol Pashinyan described the project as the ‘Stargate of Armenia’, underscoring its potential to transform the national tech sector.

Firebird CEO Razmig Hovaghimian said the hub would help develop local talent and attract international attention, while the Afeyan Foundation, led by Noubar Afeyan, is set to come on board as a founding investor.

Instead of limiting its role to funding, the Armenian government will also provide land, tax breaks and simplified regulation to support the project, strengthening its push toward a competitive digital economy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Amazon launches AU$20 billion investment in Australian solar-powered data centres

Amazon will invest AU$20 billion to expand its data centre infrastructure in Australia, using solar and wind power instead of traditional energy sources.

The plan includes power purchase agreements with three utility-scale solar plants developed by European Energy, one of which—Mokoan Solar Park in Victoria—is already operational. The other two projects, Winton North and Bullyard Solar Parks, are expected to lift total solar capacity to 333MW.

The investment supports Australia’s aim to enhance its cloud and AI capabilities. Amazon’s commitment includes purchasing over 170MW of power from these projects, contributing to both data centre growth and the country’s renewable energy transition.

According to the International Energy Agency, electricity demand from data centres is expected to more than double by 2030, driven by AI.

Amazon Web Services CEO Matt Garman said the move positions Australia to benefit from AI’s economic potential. The company, already active in solar projects across New South Wales, Queensland and Victoria, continues to prioritise renewables to decarbonise operations and meet surging energy needs.

Instead of pursuing growth through conventional means, Amazon’s focus on clean energy could set a precedent for other tech giants expanding in the region.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI turns to Google Cloud in shift from solo AI race

OpenAI has entered into an unexpected partnership with Google, using Google Cloud to support its growing AI infrastructure needs.

Despite being fierce competitors in AI, the two tech giants recognise that long-term success may require collaboration instead of isolation.

As the demand for high-performance hardware soars, traditional rivals join forces to keep pace. OpenAI, previously backed heavily by Microsoft, now draws from Google’s vast cloud resources, hinting at a changing attitude in the AI race.

Rather than going it alone, firms may benefit more by leveraging each other’s strengths to accelerate development.

Google CEO Sundar Pichai, speaking on a podcast, suggested there is room for multiple winners in the AI sector. He even noted that a major competitor had ‘invited me to a dance’, underscoring a new phase of pragmatic cooperation.

While Google still faces threats to its search dominance from tools like ChatGPT, business incentives may override rivalry.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI health tools need clinicians to prevent serious risks, Oxford study warns

The University of Oxford has warned that AI in healthcare, particularly chatbots, should not operate without human oversight.

Researchers found that relying solely on AI for medical self-assessment could worsen patient outcomes instead of improving access to care. The study highlights how these tools, while fast and data-driven, fall short in delivering the judgement and empathy that only trained professionals can offer.

The findings raise alarm about the growing dependence on AI to fill gaps caused by doctor shortages and rising costs. Chatbots are often seen as scalable solutions, but without rigorous human-in-the-loop validation, they risk providing misleading or inconsistent information, particularly to vulnerable groups.

Rather than helping, they might increase health disparities by delaying diagnosis or giving patients false reassurance.

Experts are calling for safer, hybrid approaches that embed clinicians into the design and ongoing use of AI tools. The Oxford researchers stress that continuous testing, ethical safeguards and clear protocols must be in place.

Instead of replacing clinical judgement, AI should support it. The future of digital healthcare hinges not just on innovation but on responsibility and partnership between technology and human care.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Taiwan tightens rules on chip shipments to China

Taiwan has officially banned the export of chips and chiplets to China’s Huawei and SMIC, joining the US in tightening restrictions on advanced semiconductor transfers.

The decision follows reports that TSMC, the world’s largest contract chipmaker, was misled into supplying chiplets used in Huawei’s Ascend 910B AI accelerator. The US Commerce Department had reportedly considered a fine of over $1 billion against TSMC for that incident.

Taiwan’s new rules aim to prevent further breaches by requiring export permits for any transactions with Huawei or SMIC.

The distinction between chips and chiplets is key to the case. Traditional chips are built as single-die monoliths using the same process node, while chiplets are modular and can combine various specialised components, such as CPU or AI cores.

Huawei allegedly used shell companies to acquire chiplets from TSMC, bypassing existing US restrictions. If TSMC had known the true customer, it likely would have withheld the order. Taiwan’s new export controls are designed to ensure stricter oversight of future transactions and prevent repeat deceptions.

The broader geopolitical stakes are clear. Taiwan views the transfer of advanced chips to China as a national security threat, given Beijing’s ambitions to reunify with Taiwan and the potential militarisation of high-end semiconductors.

With Huawei claiming its processors are nearly on par with Western chips—though analysts argue they lag two to three generations behind—the export ban could further isolate China’s chipmakers.

Speculation persists that Taiwan’s move was partly influenced by negotiations with the US to avoid the proposed fine on TSMC, bringing both countries into closer alignment on chip sanctions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK health sector adopts AI while legacy tech lags

The UK’s healthcare sector has rapidly embraced AI, with adoption rising from 47% in 2024 to 94% in 2025, according to SOTI’s new report ‘Healthcare’s Digital Dilemma’.

AI is no longer confined to administrative tasks, as 52% of healthcare professionals now use it for diagnosis and 57% to personalise treatments. SOTI’s Stefan Spendrup said AI is improving how care is delivered and helping clinicians make more accurate, patient-specific decisions.

However, outdated systems continue to hamper progress. Nearly all UK health IT leaders report challenges from legacy infrastructure, Internet of Things (IoT) tech and telehealth tools.

While connected devices are widely used to support patients remotely, 73% rely on outdated, unintegrated systems, a figure significantly higher than the global average of 65%.

These systems limit interoperability and heighten security risks, with 64% experiencing regular tech failures and 43% citing network vulnerabilities.

The strain on IT teams is evident. Nearly half report being unable to deploy or manage new devices efficiently, and more than half struggle to offer remote support or access detailed diagnostics. Time lost to troubleshooting remains a common frustration.

The UK appears more affected by these challenges than other countries surveyed, indicating a pressing need to modernise infrastructure instead of continuing to patch ageing technology.

Data security remains the top IT concern in UK healthcare, yet fewer IT teams now treat it as a priority, down from 33% in 2024 to 24% in 2025, even though reported data breaches rose sharply from 71% to 84% over the same period.

Spendrup warned that innovation risks being undermined unless the sector rebalances priorities, with more focus on securing systems and replacing legacy tools instead of delaying necessary upgrades.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

NSA and allies set AI data security standards

The National Security Agency (NSA), in partnership with cybersecurity agencies from the UK, Australia, New Zealand, and others, has released new guidance aimed at protecting the integrity of data used in AI systems.

The Cybersecurity Information Sheet (CSI), titled AI Data Security: Best Practices for Securing Data Used to Train & Operate AI Systems, outlines emerging threats and sets out 10 recommendations for mitigating them.

The CSI builds on earlier joint guidance from 2024 and signals growing global urgency around safeguarding AI data instead of allowing systems to operate without scrutiny.

The report identifies three core risks across the AI lifecycle: tampered datasets in the supply chain, deliberately poisoned data intended to manipulate models, and data drift—where changes in data over time reduce performance or create new vulnerabilities.
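Of the three risks, data drift is the most amenable to simple automated monitoring. A minimal sketch (the statistic and threshold here are illustrative, not drawn from the CSI): compare incoming data against a reference snapshot taken at training time and flag shifts beyond a tolerance.

```python
import statistics

def drift_score(reference, current):
    """Crude drift measure: shift in the current mean, expressed in units
    of the reference data's standard deviation."""
    ref_mean = statistics.mean(reference)
    ref_std = statistics.stdev(reference)
    return abs(statistics.mean(current) - ref_mean) / ref_std

reference = [0.9, 1.0, 1.1, 1.0, 0.95, 1.05, 1.0, 0.98]  # training-time snapshot
stable = [1.02, 0.97, 1.01, 0.99]    # incoming data, similar distribution
drifted = [1.9, 2.1, 2.0, 2.05]      # incoming data after the distribution shifted

THRESHOLD = 3.0  # illustrative: flag shifts beyond 3 reference standard deviations
print("stable batch drift score: ", drift_score(reference, stable))
print("drifted batch drift score:", drift_score(reference, drifted))
```

Production monitoring would use richer statistics (for example, full-distribution tests per feature), but the principle is the same: drift is detected by comparison against a trusted baseline, which is one reason the CSI stresses preserving provenance for that baseline data.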

These threats may erode accuracy and trust in AI systems, particularly in sensitive areas like defence, cybersecurity, and critical infrastructure, where even small failures could have far-reaching consequences.

To reduce these risks, the CSI recommends a layered approach—starting with sourcing data from reliable origins and tracking provenance using digital credentials. It advises encrypting data at every stage, verifying integrity with cryptographic tools, and storing data securely in certified systems.

Additional measures include deploying zero trust architecture, using digital signatures for dataset updates, and applying access controls based on data classification instead of relying on broad administrative trust.
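The integrity-verification and dataset-signature recommendations can be illustrated with Python’s standard library: hash the dataset, authenticate the hash with a keyed HMAC, and reject any update whose tag no longer verifies. (A real deployment would use public-key signatures and a managed key store; the shared secret below is a placeholder for illustration only.)

```python
import hashlib
import hmac

SECRET_KEY = b"example-key-from-a-secrets-manager"  # illustrative placeholder

def sign_dataset(data: bytes) -> str:
    """Produce an authentication tag over the dataset's SHA-256 digest."""
    digest = hashlib.sha256(data).digest()
    return hmac.new(SECRET_KEY, digest, hashlib.sha256).hexdigest()

def verify_dataset(data: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(sign_dataset(data), tag)

dataset = b"label,text\n1,hello\n0,goodbye\n"
tag = sign_dataset(dataset)

print(verify_dataset(dataset, tag))                    # untampered: True
print(verify_dataset(dataset + b"1,poisoned\n", tag))  # tampered: False
```

Any poisoned or tampered update changes the digest, so the tag fails to verify and the training pipeline can refuse the data before it ever reaches a model.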

The CSI also urges ongoing risk assessments using frameworks like NIST’s AI RMF, encouraging organisations to anticipate emerging challenges such as quantum threats and advanced data manipulation.

Privacy-preserving techniques, secure deletion protocols, and infrastructure controls round out the recommendations.

Rather than treating AI as a standalone tool, the guidance calls for embedding strong data governance and security throughout its lifecycle to prevent compromised systems from shaping critical outcomes.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Fake DeepSeek ads deliver ‘BrowserVenom’ malware to curious AI users

Cybercriminals are exploiting the surge in interest around local AI tools by spreading a new malware strain via Google ads.

According to antivirus firm Kaspersky, attackers use fake ads for DeepSeek’s R1 AI model to deliver ‘BrowserVenom,’ malware designed to intercept and manipulate a user’s internet traffic instead of merely infecting the device.

The attackers purchased ads appearing in Google search results for ‘deep seek r1.’ Users who clicked were redirected to a fake website—deepseek-platform[.]com—which mimicked the official DeepSeek site and offered a file named AI_Launcher_1.21.exe.

Kaspersky’s analysis of the site’s source code uncovered developer notes in Russian, suggesting the campaign is operated by Russian-speaking actors.

Once launched, the fake installer displayed a decoy installation screen for the R1 model, but silently deployed malware that altered browser configurations.

BrowserVenom rerouted web traffic through a proxy server controlled by the hackers, allowing them to decrypt browsing sessions and capture sensitive data, while evading most antivirus tools.

Kaspersky reports confirmed infections across multiple countries, including Brazil, Cuba, India, and South Africa.

The malicious domain has since been taken down. However, the incident highlights the dangers of downloading AI tools from unofficial sources. Open-source models like DeepSeek R1 require technical setup, typically involving multiple configuration steps, instead of a simple Windows installer.

As interest in running local AI grows, users should verify official domains and avoid shortcuts that could lead to malware.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!