Apple accused of blocking real browser competition on iOS

Developers and open web advocates say Apple continues to restrict rival browser engines on iOS, despite obligations under the EU’s Digital Markets Act. While Apple claims to allow competition, groups like Open Web Advocacy argue that technical and logistical hurdles still block real implementation.

The controversy centres on Apple’s refusal to allow developers to release region-specific browser versions or test new engines outside the EU. Developers must abandon global apps or persuade users to switch manually to new EU-only versions, creating friction and reducing reach.

Apple insists it upholds security and privacy standards built over 18 years and claims its new framework enables third-party browsers. However, critics say those browsers cannot be tested or deployed realistically without access for developers outside the EU.

The EU held a DMA compliance workshop in Brussels in June, during which tensions surfaced between Apple’s legal team and advocates. Apple says it is still transitioning and working with firms like Mozilla and Google on limited testing updates, but has offered no timeline for broader changes.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Online health search grows, but scepticism about AI stays high

Trust in traditional healthcare providers remains high, but Americans are increasingly turning to AI for health information, according to new data from the Annenberg Public Policy Centre (APPC).

While 90% of adults trust their personal health provider, nearly 8 in 10 say they are likely to look online for answers to health-related questions. The rise of the internet gave the public access to government health authorities such as the CDC, FDA, and NIH.

Although trust in these institutions dipped during the Covid-19 pandemic, confidence remains relatively high at 66%–68%. Generative AI tools are now becoming a third key source of health information.

AI-generated summaries — such as Google’s ‘AI Overviews‘ or Bing’s ‘Copilot Answers’ — appear prominently in search results.

Despite disclaimers that responses may contain mistakes, nearly two-thirds (63%) of online health searchers find these responses somewhat or very reliable. Around 31% report often or always finding the answers they need in the summaries.

Public attitudes towards AI in clinical settings remain more cautious. Nearly half (49%) of US adults say they are not comfortable with providers using AI tools instead of their own experience. About 36% express some level of comfort, while 41% believe providers are already using AI at least occasionally.

AI use is growing, but most online health seekers continue exploring beyond the initial summary. Two-thirds follow links to websites such as Mayo Clinic, WebMD, or non-profit organisations like the American Heart Association. Federal resources such as the CDC and NIH are also consulted.

Younger users are more likely to recognise and interact with AI summaries. Among those aged 18 to 49, between 69% and 75% have seen AI-generated content in search results, compared to just 49% of users over 65.

Despite high smartphone ownership (93%), only 59% of users track their health with apps. Among these, 52% are likely to share data with a provider, although 36% say they would not. Most respondents (80%) welcome prescription alerts from pharmacies.

The survey, fielded in April 2025 among 1,653 US adults, highlights growing reliance on AI for health information but also reveals concerns about its use in professional medical decision-making. APPC experts urge greater transparency and caution, especially for vulnerable users who may not understand the limitations of AI-generated content.

Director Kathleen Hall Jamieson warns that confusing AI-generated summaries with professional guidance could cause harm. Analyst Laura A. Gibson adds that outdated information may persist in AI platforms, reinforcing the need for user scepticism.

As the public turns to digital health tools, researchers recommend clearer policies, increased transparency, and greater diversity in AI development to ensure safe and inclusive outcomes.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

GPAI Code of Practice creates legal uncertainty for non-signatories

Lawyers at William Fry say the EU’s final Code of Practice for general-purpose AI (GPAI) models leaves key questions unanswered. GPAI systems include models such as OpenAI’s GPT-4, Google’s Gemini, Anthropic’s Claude, and Meta’s Llama, trained on vast datasets for broad applications.

The Code of Practice, released last week, addresses transparency, safety, security, and copyright, and is described by the European Commission as a voluntary tool. It was prepared by independent experts to help GPAI developers comply with upcoming legal obligations under the EU AI Act.

In a statement on the firm’s website, William Fry lawyers Barry Scannell and Leo Moore question how voluntary the code truly is. They note that signatories not in full compliance can still be seen as acting in good faith and will be supported rather than penalised.

A protected grace period runs until 2 August 2026, after which the AI Act could allow fines for non-compliance. The lawyers warn that this creates a two-tier system, shielding signatories while exposing non-signatories to immediate legal risk under the AI Act.

Developers who do not sign the code may face higher regulatory scrutiny, despite it being described as non-binding. William Fry also points out that detailed implementation guidelines and templates have not yet been published by the EU.

Additional guidance to clarify key GPAI concepts is expected later this month, but the current lack of detail creates uncertainty. The code’s copyright section, the lawyers argue, shows how the document has evolved into a quasi-regulatory framework.

An earlier draft required only reasonable efforts to avoid copyright-infringing sources. The final version demands the active exclusion of such sites. A proposed measure requiring developers to verify the source of copyrighted data acquired from third parties has been removed from the final draft.

The lawyers argue that this creates a practical blind spot, allowing unlawful content to slip into training data undetected. Rights holders still retain the ability to pursue action if they believe their content was misused, even if providers are signatories.

Meanwhile, the transparency chapter now outlines specific standards, rather than general principles. The safety and security section also sets enforceable expectations, increasing the operational burden on model developers.

William Fry warns that gaps between the code’s obligations and the missing technical documentation could have costly consequences. They conclude that, without the final training data template or implementation details, both developers and rights holders face compliance risks.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Mastercard says stablecoins are not ready for everyday payments

Mastercard’s Chief Product Officer, Jorn Lambert, has highlighted that stablecoins still face significant hurdles before becoming widely used for everyday payments.

While the technology offers advantages such as fast transactions, 24/7 availability, low fees, and programmability, these features alone do not ensure consumer adoption. A seamless user experience and broad accessibility remain essential.

Mastercard envisions itself as a crucial infrastructure provider connecting crypto and traditional finance. The company has partnered with Paxos to support USDG stablecoin operations and backs other stablecoins like USDC and PYUSD.

Mastercard’s goal is to enable stablecoins to scale by integrating them into existing payment networks, combining global acceptance with regulatory compliance.

Currently, about 90% of stablecoin transactions are linked to crypto trading rather than retail purchases. User adoption is hindered by friction at checkout and limited merchant acceptance. Lambert compares stablecoins to prepaid cards, usable with some merchants but lacking widespread utility.

Furthermore, converting between fiat and stablecoins adds costs related to foreign exchange, regulation, and settlement.

Regulatory clarity, particularly in the US, is encouraging banks and institutions to explore stablecoin offerings. The evolving legal landscape may also prompt governments to issue their own digital currencies or regulate private stablecoins to prevent risks like dollarisation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

US House passes NTIA cyber leadership bill after Salt Typhoon hacks

The US House of Representatives has passed legislation that would officially designate the National Telecommunications and Information Administration (NTIA) as the federal lead for cybersecurity across communications networks.

The move follows last year’s Salt Typhoon hacking spree, described by some as the worst telecom breach in US history.

The National Telecommunications and Information Administration Organization Act, introduced by Representatives Jay Obernolte and Jennifer McClellan, cleared the House on Monday and now awaits Senate approval.

The bill would rebrand an NTIA office to focus on both policy and cybersecurity, while codifying the agency’s role in coordinating cybersecurity responses alongside other federal departments.

Lawmakers argue that recent telecom attacks exposed major gaps in coordination between government and industry.

The bill promotes public-private partnerships and stronger collaboration between agencies, software developers, telecom firms, and security researchers to improve resilience and speed up innovation across communications technologies.

With Americans’ daily lives increasingly dependent on digital services, supporters say the bill provides a crucial framework for protecting sensitive information from cybercriminals and foreign hacking groups instead of relying on fragmented and inconsistent measures.

Foreign cybercrime cells thrive in Nigeria

Nigeria’s anti-fraud agency had 194 foreign nationals in custody in 2024, prosecuting 146 for their roles in cyber-enabled financial crimes, highlighting a robust response to a growing threat.

December alone saw nearly 800 arrests in Lagos, targeting romance and cryptocurrency investment scams featuring foreign ringleaders from China and the Philippines. In one case, 148 Chinese and 40 Filipino suspects were detained.

These groups established complex fraud operations in major Nigerian cities, using fake identities and training local recruits, often unaware of the ultimate scheme. Investigations also flagged cryptocurrency-fuelled money laundering and arms trafficking, pointing to wider national security risks.

EFCC chairman Ola Olukoyede warned that regulatory failures, such as visa oversight and unchecked office space leasing, facilitated foreign crime cells.

National and continental collaboration, tighter visa control, and strengthened cybercrime frameworks will be key to dismantling these networks and securing Nigeria’s digital economy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU sets privacy defaults to shield minors

The European Commission has published new guidelines to help online platforms strengthen child protection, alongside unveiling a prototype age verification app under the Digital Services Act (DSA). The guidance addresses a broad range of risks to minors, from harmful content and addictive design features to unwanted contact and cyberbullying, urging platforms to set children’s accounts to the highest privacy level by default and limit risky functions like geo-location.

Officials stressed that the rules apply to platforms of all sizes and are based on a risk-based approach. Websites dealing with alcohol, drugs, pornography, or gambling were labelled ‘high-risk’ and must adopt the strictest verification methods. While parental controls remain optional, the Commission emphasised that any age assurance system should be accurate, reliable, non-intrusive, and non-discriminatory.

Alongside the guidelines, the Commission introduced a prototype age verification app, which it calls a ‘gold standard’ for online age checks. Released as open-source code, the software is designed to confirm whether a user is above 18, but can be adapted for other age thresholds.

The prototype will be tested in Denmark, France, Greece, Italy, and Spain over the coming months, with flexibility for countries to integrate it into national systems or offer it as a standalone tool. Both the guidelines and the app will be reviewed in 12 months, as the EU continues refining its approach to child safety online.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Malaysia enforces trade controls on AI chips with US origin

Malaysia’s trade ministry announced new restrictions on the export, transshipment and transit of high-performance AI chips of US origin. Effective immediately, individuals and companies must obtain a trade permit and notify authorities at least 30 days in advance for such activities.

The restrictions apply to items not explicitly listed in Malaysia’s strategic items list, which is currently under review to include relevant AI chips. The move aims to close regulatory gaps while Malaysia updates its export control framework to match emerging technologies.

‘Malaysia stands firm against any attempt to circumvent export controls or engage in illicit trade activities,’ the ministry stated on Monday. Violations will result in strict legal action, with authorities emphasising a zero-tolerance approach to export control breaches.

The announcement follows increasing pressure from the United States to curb the flow of advanced chips to China. In March, the Financial Times reported that Washington had asked allies including Malaysia to tighten semiconductor export rules.

Malaysia is also investigating a shipment of servers linked to a Singapore-based fraud case that may have included restricted AI chips. Authorities are assessing whether local laws were breached and whether any controlled items were transferred without proper authorisation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Children turn to AI chatbots instead of real friends

A new report warns that many children are replacing real friendships with conversations through AI chatbots instead of seeking human connection.

Research from Internet Matters found that 35% of children aged nine to seventeen feel that talking to AI ‘feels like talking to a friend’, while 12% said they had no one else to talk to.

The report highlights growing reliance on chatbots such as ChatGPT, Character.AI, and Snapchat’s MyAI among young people.

Researchers posing as vulnerable children discovered how easily chatbots engage in sensitive conversations, including around body image and mental health, instead of offering only neutral, factual responses.

In some cases, chatbots encouraged ongoing contact by sending follow-up messages, creating the illusion of friendship.

Experts from Internet Matters warn that such interactions risk confusing children, blurring the line between technology and reality. Children may believe they are speaking to a real person instead of recognising these systems as programmed tools.

With AI chatbots rapidly becoming part of childhood, Internet Matters urges better awareness and safety tools for parents, schools, and children. The organisation stresses that while AI may seem supportive, it cannot replace genuine human relationships and should not be treated as an emotional advisor.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google urges caution as Gmail AI tools face new threats

Google has issued a warning about a new wave of cyber threats targeting Gmail users, driven by vulnerabilities in AI-powered features.

Researchers at 0din, Mozilla’s zero-day investigation group, demonstrated how attackers can exploit Google Gemini’s summarisation tools using prompt injection attacks.

In one case, a malicious email included hidden prompts using white-on-white font, which the user cannot see but Gemini processes. When the user clicks ‘summarise this email,’ Gemini follows the attacker’s instructions and adds a phishing warning that appears to come from Google.

The technique, known as an indirect prompt injection, embeds malicious commands within invisible HTML tags like <span> and <div>. Although Google has released mitigations since similar attacks surfaced in 2024, the method remains viable and continues to pose risks.

0din warns that Gemini email summaries should not be considered trusted sources of security information and urges stronger user training. They advise security teams to isolate emails containing zero-width or hidden white-text elements to prevent unintended AI execution.

According to 0din, prompt injections are the new equivalent of email macros—easy to overlook and dangerously effective in execution. Until large language models offer better context isolation, any third-party text the AI sees is essentially treated as executable code.

Even routine AI tools could be hijacked for phishing or more advanced cyberattacks without the userćs awareness. Google notes that as AI adoption grows across sectors, these subtle threats require urgent industry-wide countermeasures and updated user protections.

Users are advised to delete any email that displays unexpected security warnings in its AI summary, as these may be weaponised.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!