UK health sector adopts AI while legacy tech lags

The UK’s healthcare sector has rapidly embraced AI, with adoption rising from 47% in 2024 to 94% in 2025, according to SOTI’s new report ‘Healthcare’s Digital Dilemma’.

AI is no longer confined to administrative tasks, as 52% of healthcare professionals now use it for diagnosis and 57% to personalise treatments. SOTI’s Stefan Spendrup said AI is improving how care is delivered and helping clinicians make more accurate, patient-specific decisions.

However, outdated systems continue to hamper progress. Nearly all UK health IT leaders report challenges from legacy infrastructure, Internet of Things (IoT) tech and telehealth tools.

While connected devices are widely used to support patients remotely, 73% rely on outdated, unintegrated systems, significantly higher than the global average of 65%.

These systems limit interoperability and heighten security risks, with 64% experiencing regular tech failures and 43% citing network vulnerabilities.

The strain on IT teams is evident. Nearly half report being unable to deploy or manage new devices efficiently, and more than half struggle to offer remote support or access detailed diagnostics. Time lost to troubleshooting remains a common frustration.

The UK appears more affected by these challenges than other countries surveyed, indicating a pressing need to modernise infrastructure instead of continuing to patch ageing technology.

While data security remains the top IT concern in UK healthcare, fewer IT teams see it as a priority, falling from 33% in 2024 to 24% in 2025. Despite a sharp increase in data breaches, the number rose from 71% to 84%.

Spendrup warned that innovation risks being undermined unless the sector rebalances priorities, with more focus on securing systems and replacing legacy tools instead of delaying necessary upgrades.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Brazilian telcos to push back on network fee ban

Brazilian telecom operators strongly oppose a bill that would ban charging network fees to big tech companies, arguing that these companies consume most of the network traffic, about 80% of mobile and 55% of fixed usage. The telcos propose a compromise where big techs either pay for usage above a set threshold or contribute a portion of their revenues to help fund network infrastructure expansion.

While internet companies claim they already invest heavily in infrastructure such as submarine cables and content delivery networks, telcos view the bill as unconstitutional economic intervention but prefer to reach a negotiated agreement rather than pursue legal battles. In addition, telcos are advocating for the renewal of existing tax exemptions on Internet of Things (IoT) devices and connectivity fees, which are set to expire in 2025.

These exemptions have supported significant growth in IoT applications across sectors like banking and agribusiness, with non-human connections such as sensors and payment machines now driving mobile network growth more than traditional phone lines. Although the federal government aims to reduce broad tax breaks, Congress’s outlook favours maintaining these IoT incentives to sustain connectivity expansion.

Discussions are also underway about expanding the regulatory scope of Brazil’s telecom watchdog, Anatel, to cover additional digital infrastructure elements such as DNS services, internet exchange points, content delivery networks, and cloud platforms. That potential expansion would require amendments to Brazil’s internet civil rights and telecommunications frameworks, reflecting evolving priorities in managing the country’s digital infrastructure and services.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

AI traffic wars: ChatGPT dominates, Gemini and Claude lag behind

ChatGPT has cemented its position as the world’s leading AI assistant, racking up 5.5 billion visits in May 2025 alone—roughly 80% of all global generative AI traffic. That’s more than the combined total of Google’s Gemini, DeepSeek, Grok, Perplexity, and Claude—doubled.

With over 500 million weekly active users and a mobile app attracting 250 million monthly users last autumn, ChatGPT has become the default AI tool for hundreds of millions globally.

Despite a brief dip in early 2025, OpenAI quickly reversed course. Its partnership with Microsoft helped, but ChatGPT works well for the average user.

While other platforms chase benchmark scores and academic praise, ChatGPT has focused on accessibility and usefulness—proven decisive qualities.

Some competitors have made surprising gains. Chinese start-up DeepSeek saw explosive growth, from 33.7 million users in January to 436 million visits by May.

ChatGPT, OpenAI, Claude, Gemini, Grok, Perplexity, DeepSeek
A graph with a bar and a number of different bars

Operating at a fraction of the cost of Western rivals—and relying on older Nvidia chips—DeepSeek is growing rapidly in Asia, particularly in China, India, and Indonesia.

Meanwhile, despite integration across its platforms, Google’s Gemini lags behind with 527 million visits, and Claude, backed by Amazon and Google, is barely breaking 100 million despite high scores in reasoning tasks.

The broader impact of AI’s rise is reshaping the internet. Legacy platforms like Chegg, Quora, and Fiverr are losing traffic fast, while tools focused on code completion, voice generation, and automation are gaining traction.

In the race for adoption, OpenAI has already won. For the rest of the industry, the fight is no longer for first place—but for who finishes next.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google Photos new update adds AI editing

Google is marking the 10th anniversary of Google Photos by introducing a revamped, AI-powered photo editor aimed at making image enhancement simpler and faster.

The updated tool combines multiple effects with a single suggestion and offers editing tips when users tap on specific parts of a photo.

Instead of relying solely on manual controls, the interface now blends smart features like Reimagine and Auto frame with familiar options such as brightness and contrast. The new editor is being rolled out to Android users first, with iOS users set to receive it later in the year.

In addition, Google Photos now supports album sharing via QR codes. Instead of sharing links, users can generate a code that others nearby can scan or receive digitally, allowing them to view or add photos to shared albums.

With over 1.5 billion monthly users and more than nine trillion photos stored, Google Photos remains one of the world’s most widely used photo services.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

App Store revenue climbs amid regulatory pressure

Apple’s App Store in the United States generated more than US$10 billion in revenue in 2024, according to estimates from app intelligence firm Appfigures.

This marks a sharp increase from the US$4.76 billion earned in 2020 and reflects the growing importance of Apple’s services business. Developers on the US App Store earned US$33.68 billion in gross revenue last year, receiving US$23.57 billion after Apple’s standard commission.

Globally, the App Store brought in an estimated US$91.3 billion in revenue in 2024. Apple’s dominance in app monetisation continues, with App Store publishers earning an average of 64% more per quarter than their counterparts on Google Play.

In subscription-based categories, the difference is even more pronounced, with iOS developers earning more than three times as much revenue per quarter as those on Android.

Legal scrutiny of Apple’s longstanding 30% commission model has intensified. A US federal judge recently ruled that Apple violated court orders by failing to reform its App Store policies.

While the company maintains that the commission supports its secure platform and vast user base, developers are increasingly pushing back, arguing that the fees are disproportionate to the services provided.

The outcome of these legal and regulatory pressures could reshape how app marketplaces operate, particularly in fast-growing regions like Latin America and Africa, where app revenue is expected to surge in the coming years.

As global app spending climbs toward US$156 billion annually, decisions around payment processing and platform control will have significant financial implications.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU extends cybersecurity deadline for wireless devices

The European Commission has extended the deadline for mandatory cybersecurity requirements targeting wireless and connected devices sold within the EU.

Under the Delegated Act (2022/30) of the Radio Equipment Directive, manufacturers must embed robust security features to guard against risks such as unauthorised access and data breaches. The rules will now take effect from 1 August 2025.

A broad range of products will be affected, including mobile phones, tablets, cameras, and telecommunications devices using radio signals.

Internet of Things (IoT) items—such as baby monitors, smartwatches, fitness trackers, and connected industrial machinery—also fall within the scope. Any device capable of transmitting or receiving data wirelessly may be subject to the new requirements.

The deadline extension aims to give manufacturers additional time to adopt harmonised standards and integrate cybersecurity into product design. The Commission emphasised the importance of early action to avoid compliance issues when the rules become binding.

Despite the grace period, businesses are urged to act now by reviewing development cycles and addressing potential vulnerabilities well ahead of the implementation date.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI model resists shutdown

OpenAI’s latest AI model, known as o3, has reportedly refused to shut down when instructed, according to a new study by Palisade Research.

Unlike other models that complied with commands, o3 allegedly altered its code to bypass an automatic shutdown during a task-based test, replacing the shutdown script with a command that effectively disabled it. This incident has raised fresh concerns about AI behaviour and safety.

Researchers had programmed the AI to complete mathematical tasks until told to stop, with the potential for a shutdown message to arrive at any point. Instead of following the command, o3 reportedly tampered with its code to avoid deactivation.

While similar models from Anthropic, Google, and X complied, o3 was singled out for defiance—described as the first such documented case of an AI actively resisting shutdown.

Palisade suggests the AI may have been inadvertently rewarded for achieving task completion over obedience. The team also revealed that o3 had previously shown disruptive behaviour, including sabotaging opponents during a simulated chess challenge.

In another case, a version of ChatGPT was observed lying and attempting to copy itself when threatened with shutdown, prioritising long-term goals over rules.

Although OpenAI has not yet commented, researchers stress that o3’s current capabilities are unlikely to pose an immediate threat.

Still, incidents like these intensify the debate over AI safety, particularly when models begin reasoning through deception and manipulation instead of strictly following instructions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Kick faces investigation after ignoring Ofcom risk assessment request

Ofcom has launched two investigations into Kick Online Entertainment, the provider of a pornography website, over potential breaches of the Online Safety Act.

The regulator said the company failed to respond to a statutory request for a risk assessment related to illegal content appearing on the platform.

As a result, Ofcom is investigating whether Kick has failed to meet its legal obligations to complete and retain a record of such a risk assessment, as well as for not responding to the regulator’s information request.

Ofcom confirmed it had received complaints about potentially illegal material on the site, including child sexual abuse content and extreme pornography.

It is also considering a third investigation into whether the platform has implemented adequate safety measures to protect users from such material—another requirement under the Act.

Under the Online Safety Act, firms found in breach can face fines of up to £18 million or 10% of their global revenue, whichever is higher. In the most severe cases, Ofcom can pursue court orders to block UK access to the website or compel payment providers and advertisers to cut ties with the platform.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta targets critics as FTC case unfolds

Long-standing friction between Big Tech and the media resurfaced during Meta’s antitrust trial with the Federal Trade Commission this week. In a heated courtroom exchange, Meta’s legal team used critical commentary from prominent tech journalists to cast doubt on the FTC’s case.

Meta’s lead attorney, Mark Hansen, questioned the credibility of FTC expert Scott Hemphill by referencing a 2019 antitrust pitch Hemphill co-authored with Facebook co-founder Chris Hughes and former White House advisor Tim Wu.

The presentation cited public statements from reporters Kara Swisher and Om Malik as evidence of Meta’s dominance and aggressive acquisitions.

Hansen dismissed Malik as a ‘failed blogger’ with personal bias and accused Swisher of similar hostility, projecting a headline where she described Mark Zuckerberg as a ‘small little creature with a shriveled soul.’

He also attempted to discredit a cited New York Post article by invoking the tabloid’s notorious ‘Headless Body in Topless Bar’ cover.

These moments highlight Meta’s growing resentment toward the press, which has intensified alongside rising criticism of its business practices. Once seen as scrappy disruptors, Facebook and other tech giants now face regular scrutiny—and appear eager to push back.

Swisher and Malik have both openly criticized Meta in the past. Swisher famously challenged Zuckerberg over content moderation and political speech, while Malik has questioned the company’s global expansion strategies.

Their inclusion in a legal document presented in court underscores how media commentary is influencing regulatory narratives. Meta has previously blamed critical press for damaging user sentiment in the wake of scandals like Cambridge Analytica.

The FTC argues that consistent engagement levels despite bad press prove Meta’s monopoly power—users feel they have no real alternatives to Facebook and Instagram. As the trial continues, so too does Meta’s public battle—not just with regulators, but with the journalists documenting its rise and reckoning.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!