OpenAI and Shopify explore product sales via ChatGPT

OpenAI is preparing to take a commission from product sales made directly through ChatGPT, signalling a significant shift in its business model. The move aims to monetise free users by embedding e-commerce checkout within the chatbot.

Currently, ChatGPT provides product links that redirect users to external sites. In April, OpenAI partnered with Shopify to support this feature. Sources say the next step is enabling purchases without leaving the platform, with merchants paying OpenAI a fee per transaction.

Until now, OpenAI has earned revenue mainly from ChatGPT Plus subscriptions and enterprise deals. Despite a $300 billion valuation, the company remains loss-making and seeks new commercial avenues tied to its conversational AI tools.

E-commerce integration would also challenge Google’s grip on product discovery and paid search, as more users turn to chatbots for recommendations.

Early prototypes have been shown to brands, and financial terms are under discussion. Shopify, which powers checkout on platforms like TikTok, may also provide the backend infrastructure for ChatGPT.

Product suggestions in ChatGPT are generated based on query relevance and user-specific context, including budgets and saved preferences. With memory upgrades, the chatbot can personalise results more effectively over time.

Currently, clicking on a product shows a list of sellers based on third-party data. Rankings rely mainly on metadata rather than price or delivery speed, though this is expected to evolve.
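To make the idea of metadata-driven ranking concrete, here is a minimal sketch of how sellers might be ordered by how well their listing metadata matches a query, ignoring price and delivery speed. All field names, weights, and the scoring method are illustrative assumptions, not OpenAI's actual implementation:

```python
# Hypothetical sketch: rank seller listings by term overlap between the
# query and listing metadata (title, brand, tags). Field names and the
# scoring scheme are assumptions for illustration only.

def rank_sellers(query: str, listings: list[dict]) -> list[dict]:
    query_terms = set(query.lower().split())

    def score(listing: dict) -> float:
        # Concatenate the metadata fields a product feed might expose.
        metadata = " ".join([
            listing.get("title", ""),
            listing.get("brand", ""),
            " ".join(listing.get("tags", [])),
        ]).lower()
        # Fraction of query terms found in the metadata string.
        hits = sum(1 for term in query_terms if term in metadata)
        return hits / max(len(query_terms), 1)

    return sorted(listings, key=score, reverse=True)

listings = [
    {"seller": "A", "title": "Trail running shoes", "brand": "Acme", "tags": ["running"]},
    {"seller": "B", "title": "Leather dress shoes", "brand": "Bravo", "tags": ["formal"]},
]
ranked = rank_sellers("trail running shoes", listings)
```

A ranking like this explains why marketers would focus on enriching listing metadata rather than cutting prices, which is the behaviour the next paragraph describes.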

Marketers are already experimenting with ‘AIO’ — AI optimisation — to boost visibility in AI-generated product listings, similar to SEO for search engines.

An advertising agency executive said this shift could disrupt paid search and traditional ad models. Concerns are growing around how AI handles preferences and the fairness of its recommendations.

OpenAI has previously said it had ‘no active plans to pursue advertising’. However, CFO Sarah Friar recently confirmed that the company is open to ads in the future, using a selective approach.

CEO Sam Altman said OpenAI would not accept payments for preferential placement, but may charge small affiliate fees on purchases made through ChatGPT.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta faces fresh EU backlash over Digital Markets Act non-compliance

Meta is again under EU scrutiny after failing to fully comply with the bloc’s Digital Markets Act (DMA), despite a €200 million fine earlier this year.

The European Commission says Meta’s current ‘pay or consent’ model still falls short and could trigger further penalties. A formal warning is expected, with recurring fines likely if the company does not adjust its approach.

The DMA imposes strict rules on major tech platforms to reduce market dominance and protect digital fairness. While Meta claims its model meets legal standards, the Commission says progress has been minimal.

Over the past year, Meta has faced nearly €1 billion in EU fines, including €798 million for tying Facebook Marketplace to its main social network. The new case adds to years of tension over data practices and user consent.

The ‘pay or consent’ model offers users a choice between paying for privacy or accepting targeted ads. Regulators argue this does not meet the threshold for genuine consent and mirrors Meta’s past GDPR tactics.

Privacy advocates have long criticised Meta’s approach, saying users are left with no meaningful alternatives. Internal documents show Meta lobbied against privacy reforms and warned governments that stricter rules could reduce its investment.

The Commission now holds greater power under the DMA than it did with GDPR, allowing for faster, centralised enforcement and fines of up to 10% of global turnover.

Apple has already been fined €500 million, and Google is also under investigation. The EU’s rapid action signals a stricter stance on platform accountability. The message for Meta and other tech giants is clear: partial compliance is no longer enough to avoid serious regulatory consequences.


AI tool uses walking patterns to detect early signs of dementia

Fujitsu and Acer Medical are trialling an AI-powered tool to help identify early signs of dementia and Parkinson’s disease by analysing patients’ walking patterns. The system, called aiGait and powered by Fujitsu’s Uvance skeleton recognition technology, converts routine movements into health data.

Initial tests are taking place at a daycare centre linked to Taipei Veterans Hospital, using tablets and smartphones to record basic patient movements. The AI compares this footage with known movement patterns associated with neurodegenerative conditions, helping caregivers detect subtle abnormalities.

The tool is designed to support early intervention, with abnormal results prompting follow-up by healthcare professionals. Acer Medical plans to expand the service to elderly care centres across Taiwan by the end of the year.

Fujitsu’s AI was originally developed for gymnastics scoring and adapted to analyse real-world gait data with high accuracy using everyday mobile devices. Both companies hope to extend the technology’s use to paediatrics, sports science, and rehabilitation in future.
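The screening logic described above can be illustrated with a toy example: compare the variability of a patient's stride intervals against a reference threshold and flag irregular recordings for clinician follow-up. This is not Fujitsu's aiGait pipeline; the feature (stride-interval coefficient of variation) and the threshold are invented purely for illustration:

```python
# Illustrative gait-screening sketch. NOT Fujitsu's aiGait implementation:
# the feature and threshold below are assumptions for demonstration only.
from statistics import mean, stdev

def stride_cv(stride_intervals_s: list[float]) -> float:
    """Coefficient of variation of stride intervals (higher = more irregular gait)."""
    return stdev(stride_intervals_s) / mean(stride_intervals_s)

def flag_for_review(stride_intervals_s: list[float], threshold: float = 0.06) -> bool:
    """Flag a recording for healthcare-professional follow-up if variability is high."""
    return stride_cv(stride_intervals_s) > threshold

# Stride times in seconds, e.g. extracted from skeleton keypoints in video.
steady    = [1.02, 1.00, 1.01, 0.99, 1.00, 1.01]   # regular gait
irregular = [1.20, 0.85, 1.10, 0.80, 1.25, 0.90]   # high variability
```

In a real system, the thresholding would be replaced by a model trained on movement patterns associated with neurodegenerative conditions, but the overall flow (extract gait features, compare to references, escalate abnormalities to clinicians) matches the workflow the article describes.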


China deploys new malware tool for border phone searches

Chinese authorities reportedly use a powerful new malware tool called Massistant to extract data from seized Android phones. Developed by Xiamen Meiya Pico, the tool enables police to access messages, photos, locations, and app data once they have physical access to a device.

Cybersecurity firm Lookout revealed that Massistant operates via a desktop-connected tower, requiring unlocked devices but no advanced hacking techniques. Researchers said affected users include Chinese citizens and international travellers whose phones may be searched at borders.

The malware leaves traces on compromised phones, making post-infection detection and removal possible, though by that point the authorities already have the data. Chinese forums show growing complaints from users who discovered malware on their devices after police interactions.

Massistant is seen as the successor to an older tool, MSSocket, with Meiya Pico now controlling around 40% of China’s digital forensics market. The US previously sanctioned the firm over its surveillance technology’s links to the Chinese government.


AI Appreciation Day highlights progress and growing concerns

AI is marking another milestone as experts worldwide reflect on its rapid rise during AI Appreciation Day. From reshaping business workflows to transforming customer experiences, AI’s presence is expanding — but so are concerns over its long-term implications.

Industry leaders point to AI’s growing role across sectors. Patrick Harrington from MetaRouter highlights that control over first-party data, rather than merely processing large datasets, is now seen as key.

Vall Herard of Saifr adds that successful AI implementations depend on combining curated data with human oversight rather than relying purely on machine-driven systems.

Meanwhile, Paula Felstead from HBX Group believes AI could significantly enhance travel experiences, though scaling it across entire organisations remains a challenge.

Voice AI is changing industries that depend on customer interaction, according to Natalie Rutgers from Deepgram. Instead of complex interfaces, voice technology is improving communication in restaurants, hospitals, and banks.

At the same time, experts like Ivan Novikov from Wallarm stress the importance of securing AI systems and the APIs connecting them, as these form the backbone of modern AI services.

While some celebrate AI’s advances, others raise caution. SentinelOne’s Ezzeldin Hussein envisions AI becoming a trusted partner through responsible development rather than unchecked growth.

Naomi Buckwalter from Contrast Security warns that AI-generated code can introduce security gaps and is no substitute for human engineering, while Geoff Burke from Object First notes that AI-powered cyberattacks are becoming inevitable for businesses unable to keep pace with evolving threats.


Co-op CEO apologises after cyberattack hits 6.5 million members

Co-op CEO Shirine Khoury-Haq has confirmed that all 6.5 million members had their data stolen during a cyberattack in April.

‘I’m devastated that information was taken,’ Khoury-Haq told BBC Breakfast. ‘It hurt our members; they took their data, and it hurt our customers, and that I take personally.’

The stolen data included names, addresses, and contact details, but no financial or transaction information. Khoury-Haq said the incident felt ‘personal’ due to its impact on Co-op staff, adding that IT teams ‘fought off these criminals’ under immense pressure.

Although the hackers were removed from Co-op’s systems, the stolen information could not be recovered. The company monitored the breach and reported it to the authorities.

Co-op, which operates a membership profit-sharing model, is still working to restore its back-end systems. The financial impact has not been disclosed.

In response, Co-op is partnering with The Hacking Games, a cybersecurity recruitment initiative, to steer young talent towards legitimate careers in the field. A pilot will launch in Co-op Academies Trust schools.

The breach was part of a wider wave of cyberattacks on UK retailers, including Marks & Spencer and Harrods. Four people aged 17 to 20 have been arrested in connection with the incidents.

In a related case, Australian airline Qantas also confirmed a recent breach involving its frequent flyer programme. As with Co-op, financial data was not affected, but personal contact information was accessed.

Experts warn of increasingly sophisticated attacks on public and private institutions, calling for stronger digital defences and proactive cybersecurity strategies.


Air Serbia suffers deep network compromise in July cyberattack

Air Serbia delayed issuing June payslips after a cyberattack disrupted internal systems, according to internal memos obtained by The Register. A 10 July note told staff: ‘Given the ongoing cyberattacks, for security reasons, we will postpone the distribution of June 2025 payslips.’

The IT department is reportedly working to restore operations, and payslips will be emailed once systems are secure again. Although salaries were paid, staff could not access their payslip PDFs due to the disruption.

HR warned employees not to open suspicious emails, particularly those appearing to contain payslips or that seemed self-addressed. ‘We kindly ask that you act responsibly given the current situation,’ said one memo.

Air Serbia first informed staff about the cyberattack on 4 July, with IT teams warning of possible disruptions to operations. Managers were instructed to activate business continuity plans and adapt workflows accordingly.

By 7 July, all service accounts had been shut down and company-wide password resets were enforced. Security-scanning software was installed on endpoints, and internet access was restricted to selected airserbia.com pages.

A new VPN client was deployed due to security vulnerabilities, and data centres were shifted to a demilitarised zone. On 11 July, staff were told to leave their PCs locked but running over the weekend for further IT intervention.

An insider told The Register that the attack resulted in a deep compromise of Air Serbia’s Active Directory environment. The source claims the attackers may have gained access in early July, although exact dates remain unclear due to missing logs.

Staff reportedly fear that the breach could have involved personal data, and that the airline may not disclose the incident publicly. According to the insider, attackers had been probing Air Serbia’s exposed endpoints since early 2024.

The airline also faced several DDoS attacks earlier this year, although the latest intrusion appears far more severe. Malware, possibly an infostealer, is suspected in the breach, but no ransom demands had been made as of 15 July.

Infostealers are often used in precursor attacks before ransomware is deployed, security experts warn. Neither Air Serbia nor the government of Serbia responded to media queries by the time of publication.

Air Serbia had a record-breaking year in 2024, carrying 4.4 million passengers, a 6% increase over the previous year. Cybersecurity experts recently warned of broader attacks on the aviation industry, with groups such as Scattered Spider under scrutiny.


Hungary enforces prison terms for unauthorised crypto trading

Hungary has introduced strict penalties for individuals and companies involved in unauthorised cryptocurrency trading or services. Under the updated Criminal Code, using unauthorised crypto exchanges can lead to two years in prison, with longer terms for larger trades.

Crypto service providers operating without authorisation face even harsher penalties. Sentences can reach up to eight years for transactions exceeding 500 million forints (around $1.46 million).

The updated law defines new offences such as ‘abuse of crypto-assets’, aiming to impose stricter control over the sector.

The implementation has caused confusion among crypto companies, with Hungary’s Supervisory Authority for Regulatory Affairs (SZTFH) yet to publish compliance guidelines. Businesses now face a 60-day regulatory vacuum with no clear direction.

UK fintech firm Revolut responded by briefly halting crypto services in Hungary, citing the new legislation. It has since reinstated crypto withdrawals, while its EU entity works towards securing a regional crypto licence.


Apple accused of blocking real browser competition on iOS

Developers and open web advocates say Apple continues to restrict rival browser engines on iOS, despite obligations under the EU’s Digital Markets Act. While Apple claims to allow competition, groups like Open Web Advocacy argue that technical and logistical hurdles still block real implementation.

The controversy centres on Apple’s refusal to allow developers to release region-specific browser versions or test new engines outside the EU. Developers must abandon global apps or persuade users to switch manually to new EU-only versions, creating friction and reducing reach.

Apple insists it upholds security and privacy standards built over 18 years and claims its new framework enables third-party browsers. However, critics say those browsers cannot be tested or deployed realistically without access for developers outside the EU.

The EU held a DMA compliance workshop in Brussels in June, during which tensions surfaced between Apple’s legal team and advocates. Apple says it is still transitioning and working with firms like Mozilla and Google on limited testing updates, but has offered no timeline for broader changes.


Online health search grows, but scepticism about AI stays high

Trust in traditional healthcare providers remains high, but Americans are increasingly turning to AI for health information, according to new data from the Annenberg Public Policy Centre (APPC).

While 90% of adults trust their personal health provider, nearly 8 in 10 say they are likely to look online for answers to health-related questions. The rise of the internet gave the public access to government health authorities such as the CDC, FDA, and NIH.

Although trust in these institutions dipped during the Covid-19 pandemic, confidence remains relatively high at 66%–68%. Generative AI tools are now becoming a third key source of health information.

AI-generated summaries, such as Google’s ‘AI Overviews’ or Bing’s ‘Copilot Answers’, appear prominently in search results.

Despite disclaimers that responses may contain mistakes, nearly two-thirds (63%) of online health searchers find these responses somewhat or very reliable. Around 31% report often or always finding the answers they need in the summaries.

Public attitudes towards AI in clinical settings remain more cautious. Nearly half (49%) of US adults say they are not comfortable with providers using AI tools instead of their own experience. About 36% express some level of comfort, while 41% believe providers are already using AI at least occasionally.

AI use is growing, but most online health seekers continue exploring beyond the initial summary. Two-thirds follow links to websites such as Mayo Clinic, WebMD, or non-profit organisations like the American Heart Association. Federal resources such as the CDC and NIH are also consulted.

Younger users are more likely to recognise and interact with AI summaries. Among those aged 18 to 49, between 69% and 75% have seen AI-generated content in search results, compared to just 49% of users over 65.

Despite high smartphone ownership (93%), only 59% of users track their health with apps. Among these, 52% are likely to share data with a provider, although 36% say they would not. Most respondents (80%) welcome prescription alerts from pharmacies.

The survey, fielded in April 2025 among 1,653 US adults, highlights growing reliance on AI for health information but also reveals concerns about its use in professional medical decision-making. APPC experts urge greater transparency and caution, especially for vulnerable users who may not understand the limitations of AI-generated content.

Director Kathleen Hall Jamieson warns that confusing AI-generated summaries with professional guidance could cause harm. Analyst Laura A. Gibson adds that outdated information may persist in AI platforms, reinforcing the need for user scepticism.

As the public turns to digital health tools, researchers recommend clearer policies, increased transparency, and greater diversity in AI development to ensure safe and inclusive outcomes.
