OpenAI and Shopify explore product sales via ChatGPT

OpenAI is preparing to take a commission from product sales made directly through ChatGPT, signalling a significant shift in its business model. The move aims to monetise free users by embedding e-commerce checkout within the chatbot.

Currently, ChatGPT provides product links that redirect users to external sites. In April, OpenAI partnered with Shopify to support this feature. Sources say the next step is enabling purchases without leaving the platform, with merchants paying OpenAI a fee per transaction.

Until now, OpenAI has earned revenue mainly from ChatGPT Plus subscriptions and enterprise deals. Despite a $300 billion valuation, the company remains loss-making and seeks new commercial avenues tied to its conversational AI tools.

E-commerce integration would also challenge Google’s grip on product discovery and paid search, as more users turn to chatbots for recommendations.

Early prototypes have been shown to brands, and financial terms are under discussion. Shopify, which powers checkout on platforms like TikTok, may also provide the backend infrastructure for ChatGPT.

Product suggestions in ChatGPT are generated based on query relevance and user-specific context, including budgets and saved preferences. With memory upgrades, the chatbot can personalise results more effectively over time.

Currently, clicking on a product shows a list of sellers based on third-party data. Rankings rely mainly on metadata rather than price or delivery speed, though this is expected to evolve.

Marketers are already experimenting with ‘AIO’ — AI optimisation — to boost visibility in AI-generated product listings, similar to SEO for search engines.

An advertising agency executive said this shift could disrupt paid search and traditional ad models. Concerns are growing around how AI handles preferences and the fairness of its recommendations.

OpenAI has previously said it had ‘no active plans to pursue advertising’. However, CFO Sarah Friar recently confirmed that the company is open to ads in the future, using a selective approach.

CEO Sam Altman said OpenAI would not accept payments for preferential placement, but may charge small affiliate fees on purchases made through ChatGPT.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta faces fresh EU backlash over Digital Markets Act non-compliance

Meta is again under EU scrutiny after failing to fully comply with the bloc’s Digital Markets Act (DMA), despite a €200 million fine earlier this year.

The European Commission says Meta’s current ‘pay or consent’ model still falls short and could trigger further penalties. A formal warning is expected, with recurring fines likely if the company does not adjust its approach.

The DMA imposes strict rules on major tech platforms to reduce market dominance and protect digital fairness. While Meta claims its model meets legal standards, the Commission says progress has been minimal.

Over the past year, Meta has faced nearly €1 billion in EU fines, including €798 million for tying Facebook Marketplace to its main social network. The new case adds to years of tension over data practices and user consent.

The ‘pay or consent’ model offers users a choice between paying for privacy or accepting targeted ads. Regulators argue this does not meet the threshold for genuine consent and mirrors Meta’s past GDPR tactics.

Privacy advocates have long criticised Meta’s approach, saying users are left with no meaningful alternatives. Internal documents show Meta lobbied against privacy reforms and warned governments about reduced investment.

The Commission now holds greater power under the DMA than it did with GDPR, allowing for faster, centralised enforcement and fines of up to 10% of global turnover.

Apple has already been fined €500 million, and Google is also under investigation. The EU’s rapid action signals a stricter stance on platform accountability. The message for Meta and other tech giants is clear: partial compliance is no longer enough to avoid serious regulatory consequences.

AI system screens diabetic eye disease with near-perfect accuracy

A new AI programme is showing remarkable accuracy in detecting diabetic retinopathy, a leading cause of preventable blindness. The SMART system, short for Simple Mobile AI Retina Tracker, can scan retinal images using even basic smartphones and has achieved over 99% accuracy.

Researchers at the University of Texas Health Science Center in the US trained the AI using thousands of retinal images from diverse populations across six continents. The system processes images in under a second and can distinguish diabetic retinopathy from other eye diseases.

Experts say the technology could dramatically expand access to eye screenings, particularly in areas lacking specialist care. By integrating the tool into regular check-ups, both primary care providers and ophthalmologists could streamline early diagnosis.

Researchers highlighted that the tool’s mobile accessibility allows for global reach, potentially screening billions. The findings were presented at the annual meeting of The Endocrine Society, though they have yet to be peer-reviewed.

Europe’s quantum ambitions meet US private power and China’s state drive

Quantum computing could fundamentally reshape technology, using quantum bits (qubits) instead of classical bits. Qubits allow complex calculations beyond classical computing, transforming sectors from pharmaceuticals to defence.

Europe is investing billions in quantum technology, emphasising technological sovereignty. Yet, it competes fiercely with the United States, which enjoys substantial private investment, and China, powered by significant state-backed funding.

The UK began quantum initiatives early, launching its National Quantum Programme in 2014. It recently pledged £2.5 billion more, supporting start-ups like Orca Computing and Universal Quantum, alongside nations like Canada, Israel, and Japan.

Europe accounted for eight of the nineteen quantum start-ups established globally in 2024, including IQM Quantum Computers and Pasqal. Despite Europe’s scientific strengths, it only captured 5% of global quantum investments, versus 50% for the US.

The European Commission aims to strengthen quantum capabilities by funding six chip factories and a continent-wide Quantum Skills Academy. However, attracting sufficient private investment remains a significant challenge.

The US quantum industry thrives, driven by giants such as IBM, Google, Microsoft, IonQ, Rigetti, and D-Wave Quantum. Recent breakthroughs include Microsoft’s topological qubit and Google’s Willow quantum chip.

D-Wave Quantum has demonstrated real-world quantum advantages, solving complex optimisation problems in minutes. Its technology is now used commercially in logistics, traffic management, and supply chains.

China, meanwhile, leads in state-driven quantum funding, investing $15 billion directly and managing a $138 billion tech venture fund. By contrast, US federal investment totals about $6 billion, underscoring China’s aggressive approach.

Global investment in quantum start-ups reached $1.25 billion in Q1 2025 alone, reflecting a shift towards practical applications. By 2040, the quantum market is projected to reach $173 billion, influencing global economics and geopolitics.

Quantum computing raises geopolitical concerns, prompting democratic nations to coordinate through bodies like the OECD and G7. Interoperability, trust, and secure infrastructure have become essential strategic considerations.

Europe’s quantum ambitions require sustained investment, standard-setting leadership, and robust supply chains. Its long-term technological independence hinges on moving swiftly beyond initial funding towards genuine strategic autonomy.

Co-op CEO apologises after cyberattack hits 6.5 million members

Co-op CEO Shirine Khoury-Haq has confirmed that all 6.5 million members had their data stolen during a cyberattack in April.

‘I’m devastated that information was taken,’ Khoury-Haq told BBC Breakfast. ‘It hurt my members, they took their data, and it hurt our customers, and that I take personally.’

The stolen data included names, addresses, and contact details, but no financial or transaction information. Khoury-Haq said the incident felt ‘personal’ due to its impact on Co-op staff, adding that IT teams ‘fought off these criminals’ under immense pressure.

Although the hackers were removed from Co-op’s systems, the stolen information could not be recovered. The company monitored the breach and reported it to the authorities.

Co-op, which operates a membership profit-sharing model, is still working to restore its back-end systems. The financial impact has not been disclosed.

In response, Co-op is partnering with The Hacking Games — a cybersecurity recruitment initiative — to steer young talent towards lawful careers in tech. A pilot will launch in Co-op Academies Trust schools.

The breach was part of a wider wave of cyberattacks on UK retailers, including Marks & Spencer and Harrods. Four people aged 17 to 20 have been arrested in connection with the incidents.

In a related case, Australian airline Qantas also confirmed a recent breach involving its frequent flyer programme. As with Co-op, financial data was not affected, but personal contact information was accessed.

Experts warn of increasingly sophisticated attacks on public and private institutions, calling for stronger digital defences and proactive cybersecurity strategies.

Air Serbia suffers deep network compromise in July cyberattack

Air Serbia delayed issuing June payslips after a cyberattack disrupted internal systems, according to internal memos obtained by The Register. A 10 July note told staff: ‘Given the ongoing cyberattacks, for security reasons, we will postpone the distribution of June 2025 payslips.’

The IT department is reportedly working to restore operations, and payslips will be emailed once systems are secure again. Although salaries were paid, staff could not access their payslip PDFs due to the disruption.

HR warned employees not to open suspicious emails, particularly those appearing to contain payslips or that seemed self-addressed. ‘We kindly ask that you act responsibly given the current situation,’ said one memo.

Air Serbia first informed staff about the cyberattack on 4 July, with IT teams warning of possible disruptions to operations. Managers were instructed to activate business continuity plans and adapt workflows accordingly.

By 7 July, all service accounts had been shut down, and staff were subjected to company-wide password resets. Security-scanning software was installed on endpoints, and internet access was restricted to selected airserbia.com pages.

A new VPN client was deployed due to security vulnerabilities, and data centres were shifted to a demilitarised zone. On 11 July, staff were told to leave their PCs locked but running over the weekend for further IT intervention.

An insider told The Register that the attack resulted in a deep compromise of Air Serbia’s Active Directory environment. The source claims the attackers may have gained access in early July, although exact dates remain unclear due to missing logs.

Staff reportedly fear that the breach could have involved personal data, and that the airline may not disclose the incident publicly. According to the insider, attackers had been probing Air Serbia’s exposed endpoints since early 2024.

The airline also faced several DDoS attacks earlier this year, although the latest intrusion appears far more severe. Malware, possibly an infostealer, is suspected in the breach, but no ransom demands had been made as of 15 July.

Infostealers are often used in precursor attacks before ransomware is deployed, security experts warn. Neither Air Serbia nor the government of Serbia responded to media queries by the time of publication.

Air Serbia had a record-breaking year in 2024, carrying 4.4 million passengers — a 6% increase over the previous year. Cybersecurity experts recently warned of broader attacks on the aviation industry, with groups such as Scattered Spider under scrutiny.

Women see AI as more harmful across life settings

Women are showing more scepticism than men when it comes to AI, particularly regarding its ethics, fairness and transparency.

A national study from Georgetown University, Boston University and the University of Vermont found that women were more concerned about AI’s risks in decision-making. Concerns were especially prominent around AI tools used in the workplace, such as hiring platforms and performance review systems.

Bias may be introduced when such tools rely on historical data, which often underrepresents women and other marginalised groups. The study also found that gender influenced compliance with workplace rules surrounding AI use, especially in restrictive environments.

When AI use was banned, women were more likely to follow the rules than men. Usage jumped when tools were explicitly permitted. In cases where AI was allowed, over 80% of both women and men reported using the tools.

Women were generally more wary of AI’s impact across all areas of life — not just in the professional sphere. From personal settings to public life, survey respondents who identified as women consistently viewed AI as more harmful than beneficial.

The study, conducted via Qualtrics in August 2023, surveyed a representative US sample with a majority of female respondents. On average, participants were 45 years old, with over half identifying as women across different educational and professional backgrounds.

The research comes amid wider concerns in the AI field about ethics and accountability, often led by women researchers. High-profile cases include Google’s dismissal of Timnit Gebru and later Margaret Mitchell, both of whom raised ethical concerns about large language models.

The study’s authors concluded that building public trust in AI may require clearer policies and greater transparency in how systems are designed. They also highlighted the importance of increasing diversity among those developing AI tools to ensure more inclusive outcomes.

Online health search grows, but scepticism about AI stays high

Trust in traditional healthcare providers remains high, but Americans are increasingly turning to AI for health information, according to new data from the Annenberg Public Policy Center (APPC).

While 90% of adults trust their personal health provider, nearly 8 in 10 say they are likely to look online for answers to health-related questions. The rise of the internet gave the public access to government health authorities such as the CDC, FDA, and NIH.

Although trust in these institutions dipped during the Covid-19 pandemic, confidence remains relatively high at 66%–68%. Generative AI tools are now becoming a third key source of health information.

AI-generated summaries — such as Google’s ‘AI Overviews’ or Bing’s ‘Copilot Answers’ — appear prominently in search results.

Despite disclaimers that responses may contain mistakes, nearly two-thirds (63%) of online health searchers find these responses somewhat or very reliable. Around 31% report often or always finding the answers they need in the summaries.

Public attitudes towards AI in clinical settings remain more cautious. Nearly half (49%) of US adults say they are not comfortable with providers using AI tools instead of their own experience. About 36% express some level of comfort, while 41% believe providers are already using AI at least occasionally.

AI use is growing, but most online health seekers continue exploring beyond the initial summary. Two-thirds follow links to websites such as Mayo Clinic, WebMD, or non-profit organisations like the American Heart Association. Federal resources such as the CDC and NIH are also consulted.

Younger users are more likely to recognise and interact with AI summaries. Among those aged 18 to 49, between 69% and 75% have seen AI-generated content in search results, compared to just 49% of users over 65.

Despite high smartphone ownership (93%), only 59% of users track their health with apps. Among these, 52% are likely to share data with a provider, although 36% say they would not. Most respondents (80%) welcome prescription alerts from pharmacies.

The survey, fielded in April 2025 among 1,653 US adults, highlights growing reliance on AI for health information but also reveals concerns about its use in professional medical decision-making. APPC experts urge greater transparency and caution, especially for vulnerable users who may not understand the limitations of AI-generated content.

Director Kathleen Hall Jamieson warns that confusing AI-generated summaries with professional guidance could cause harm. Analyst Laura A. Gibson adds that outdated information may persist in AI platforms, reinforcing the need for user scepticism.

As the public turns to digital health tools, researchers recommend clearer policies, increased transparency, and greater diversity in AI development to ensure safe and inclusive outcomes.

Google expands NotebookLM with curated content and mobile access

While Gemini often dominates attention in Google’s AI portfolio, other innovative tools deserve the spotlight. One standout is NotebookLM, a virtual research assistant that helps users organise and interact with complex information across various subjects.

NotebookLM creates structured notebooks from curated materials, allowing meaningful engagement with the content. It supports dynamic features, including summaries and transformation options like Audio Overview, making research tasks more intuitive and efficient.

According to Google, featured notebooks are built using information from respected authors, academic institutions, and trusted nonprofits. Current topics include Shakespeare, Yellowstone National Park and more, offering a wide spectrum of well-sourced material.

Featured notebooks function just like regular ones, with added editorial quality. Users can navigate, explore, and repurpose content in ways that support individual learning and project needs. Google has confirmed the collection will grow over time.

NotebookLM remains in early development, yet the tool already shows potential for transforming everyday research tasks. Google also plans tighter integration with its other productivity tools, including Docs and Slides.

The tool significantly reduces the effort traditionally required for academic or creative research. Structured data presentation, combined with interactive features, makes information easier to consume and act upon.

NotebookLM was initially released on desktop but is now also available as a mobile app. Users can download it via the Google Play Store to create notebooks, add content, and stay productive from anywhere.

GPAI Code of Practice creates legal uncertainty for non-signatories

Lawyers at William Fry say the EU’s final Code of Practice for general-purpose AI (GPAI) models leaves key questions unanswered. GPAI systems include models such as OpenAI’s GPT-4, Google’s Gemini, Anthropic’s Claude, and Meta’s Llama, trained on vast datasets for broad applications.

The Code of Practice, released last week, addresses transparency, safety, security, and copyright, and is described by the European Commission as a voluntary tool. It was prepared by independent experts to help GPAI developers comply with upcoming legal obligations under the EU AI Act.

In a statement on the firm’s website, William Fry lawyers Barry Scannell and Leo Moore question how voluntary the code truly is. They note that signatories not in full compliance can still be seen as acting in good faith and will be supported rather than penalised.

A protected grace period runs until 2 August 2026, after which the AI Act could allow fines for non-compliance. The lawyers warn that this creates a two-tier system, shielding signatories while exposing non-signatories to immediate legal risk under the AI Act.

Developers who do not sign the code may face higher regulatory scrutiny, despite it being described as non-binding. William Fry also points out that detailed implementation guidelines and templates have not yet been published by the EU.

Additional guidance to clarify key GPAI concepts is expected later this month, but the current lack of detail creates uncertainty. The code’s copyright section, the lawyers argue, shows how the document has evolved into a quasi-regulatory framework.

An earlier draft required only reasonable efforts to avoid copyright-infringing sources. The final version demands the active exclusion of such sites. A proposed measure requiring developers to verify the source of copyrighted data acquired from third parties has been removed from the final draft.

The lawyers argue that this creates a practical blind spot, allowing unlawful content to slip into training data undetected. Rights holders still retain the ability to pursue action if they believe their content was misused, even if providers are signatories.

Meanwhile, the transparency chapter now outlines specific standards, rather than general principles. The safety and security section also sets enforceable expectations, increasing the operational burden on model developers.

William Fry warns that gaps between the code’s obligations and the missing technical documentation could have costly consequences. They conclude that, without the final training data template or implementation details, both developers and rights holders face compliance risks.