New Zealand central bank warns of AI risks

The Reserve Bank of New Zealand has warned that the swift uptake of AI in the financial sector could pose a threat to financial stability.

A report released on Monday highlighted how errors in AI systems, data privacy breaches and potential market distortions might magnify existing vulnerabilities instead of simply streamlining operations.

The central bank also expressed concern over the increasing dependence on a handful of third-party AI providers, which could lead to market concentration instead of healthy competition.

Such reliance, it said, could create new avenues for systemic risk and make the financial system more susceptible to cyber-attacks.

Despite the caution, the report acknowledged that AI is bringing tangible advantages, such as greater modelling accuracy, improved risk management and increased productivity. It also noted that AI could help strengthen cyber resilience rather than weaken it.

The analysis was published just ahead of the central bank’s twice-yearly Financial Stability Report, scheduled for release on Wednesday.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US lawmakers push for app store age checks

A new bill introduced by US lawmakers could force app stores like Apple’s App Store and Google Play to verify the age of all users, in a move aimed at increasing online safety for minors.

Known as the App Store Accountability Act, the legislation would require age categorisation and parental consent before minors can download apps or make in-app purchases. If passed, the law would apply to platforms with at least five million users and would come into effect one year after approval.

The bill proposes dividing users into age brackets — from ‘young child’ to ‘adult’ — and holding app stores accountable for enforcing access restrictions.

Lawmakers behind the bill, Republican Senator Mike Lee and Representative John James, argue that Big Tech companies must take responsibility for limiting children’s exposure to harmful content. They believe app stores are the right gatekeepers for verifying age and protecting minors online.

Privacy advocates and tech companies have voiced concern about the bill’s implications. Legal experts warn that verifying users’ ages may require sensitive personal data, such as ID documents or facial recognition scans, raising the risk of data misuse.

Apple said such verification would apply to all users, not just children, and criticised the idea as counterproductive to privacy.

The proposal has widened a rift between app store operators and social media platforms. While Meta, X, and Snap back centralised age checks at the app store level, Apple and Google accuse them of shifting the burden of responsibility.

Both tech giants emphasise the importance of shared responsibility and continue to engage with lawmakers on crafting practical and privacy-conscious solutions.

TikTok faces record €530 million EU fine over data concerns

TikTok has been handed a €530 million ($600 million) fine by Ireland’s Data Protection Commissioner (DPC) over data privacy violations involving user information transfers to China. 

The privacy watchdog found that TikTok failed to ensure that EU citizens’ data received sufficient protection against potential access by Chinese authorities, raising concerns among EU lawmakers.

The regulator has also set a tight six-month deadline for TikTok to bring its data practices into line with EU standards. If the platform cannot demonstrate compliance, particularly in safeguarding EU user data from remote access by China-based employees, its data transfers could be suspended entirely.

TikTok strongly opposes the ruling, asserting it has consistently adhered to EU-approved frameworks that restrict and monitor data access. The platform also highlighted recent security enhancements, including dedicated EU and US data centres, as proof of its commitment. 

TikTok claims it has never received or complied with any request from Chinese authorities for EU user data, framing the ruling as an overly strict measure that could disrupt broader industry practices.

However, the regulator raised new concerns following TikTok’s recent disclosure that some EU user data had been inadvertently stored on servers in China, though it was subsequently deleted.

The revelation prompted Ireland’s privacy watchdog to consider additional regulatory action, underscoring its serious concerns about TikTok’s transparency around data handling.

The case represents the second major privacy reprimand against TikTok in recent years, following a €345 million fine in 2023 over the mishandling of children’s data. It also fits the DPC’s pattern of tough enforcement against global tech companies headquartered in Ireland under the EU’s rigorous General Data Protection Regulation (GDPR).

Google admits using opted-out content for AI training

Google has admitted in court that it can use website content to train AI features in its search products, even when publishers have opted out of such training.

Although Google offers a way for sites to block their data from being used by its AI lab, DeepMind, the company confirmed that its broader search division can still use that data for AI-powered tools like AI Overviews.

The practice has raised concern among publishers, who fear reduced traffic as Google’s AI summarises answers directly at the top of search results, diverting users from clicking through to original sources.

Eli Collins, a vice-president at Google DeepMind, acknowledged during a Washington antitrust trial that Google’s search team could train AI using data from websites that had explicitly opted out.

The only way for publishers to fully prevent their content from being used in this way is by opting out of being indexed by Google Search altogether—something that would effectively make them invisible on the web.

Google’s approach relies on the robots.txt file, a standard that tells search bots whether they are allowed to crawl a site.
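In practice, this means a publisher can block Google’s AI-training crawler while still allowing Search to index the site. A minimal illustrative robots.txt, using Google’s documented Google-Extended token, might look like:

```txt
# Allow normal Search indexing
User-agent: Googlebot
Allow: /

# Opt out of Google's AI model training (Google-Extended token)
User-agent: Google-Extended
Disallow: /
```

As the testimony highlighted, however, this only opts a site out of DeepMind/Gemini training; it does not stop content indexed by Googlebot from feeding search-side features such as AI Overviews.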

The trial is part of a broader effort by the US Department of Justice to address Google’s dominance in the search market, which a judge previously ruled had been unlawfully maintained.

The DOJ is now asking the court to impose major changes, including forcing Google to sell its Chrome browser and stop paying to be the default search engine on other devices. These changes would also apply to Google’s AI products, which the DOJ argues benefit from its monopoly.

Testimony also revealed internal discussions at Google about how using extensive search data, such as user session logs and search rankings, could significantly enhance its AI models.

Although no model was confirmed to have been built using that data, court documents showed that top executives like DeepMind CEO Demis Hassabis had expressed interest in doing so.

Google’s lawyers have argued that competitors in AI remain strong, with many relying on direct data partnerships instead of web scraping.

Cyber incident disrupts services at Marks & Spencer

Marks & Spencer has confirmed that a cyberattack has disrupted food availability in some stores and forced the temporary shutdown of online services. The company has not officially confirmed the nature of the breach, but cybersecurity experts suspect a ransomware attack.

The retailer paused clothing and home orders on its website and app after issues arose over the Easter weekend, affecting contactless payments and click-and-collect systems. M&S said it took some systems offline as a precautionary measure.

Reports have linked the incident to the hacking group Scattered Spider, although M&S has declined to comment further or provide a timeline for the resumption of online orders. The disruption has already led to minor product shortages, and analysts anticipate a short-term hit to profits.

Still, M&S’s food division had been performing strongly, with grocery spending rising 14.4% year-on-year, according to Kantar. The retailer, which operates around 1,000 UK stores, earns about one-third of its non-food sales online. Shares dropped earlier in the week but closed Tuesday slightly up.

Meta introduces face recognition to help UAE users recover hacked accounts

Meta is introducing facial recognition tools to help UAE users recover hacked accounts on Facebook and Instagram and stop scams that misuse public figures’ images. The technology compares suspicious ads to verified profile photos and removes them automatically if a match is found.

Well-known individuals in the region are automatically enrolled in the programme but can opt out if they choose. A new video selfie feature has also been rolled out to help users regain access to compromised accounts.

This allows identity verification through a short video matched with existing profile photos, offering a faster and more secure alternative to document-based checks.

Meta confirmed that all facial data used for verification is encrypted, deleted immediately after use, and never repurposed.

The company says this is part of a broader effort to fight impersonation scams and protect both public figures and regular users, not just in the UAE but elsewhere too.

Meta’s regional director highlighted the emotional and financial harm such scams can cause, reinforcing the need for proactive defences.

France accuses Russia of cyberattacks on Olympic and election targets

France has publicly accused Russia’s military intelligence agency of launching cyberattacks against key French institutions, including the 2017 presidential campaign of Emmanuel Macron and organisations tied to the Paris 2024 Olympics.

The allegations were presented by Foreign Minister Jean-Noël Barrot at the UN Security Council, where he condemned the attacks as violations of international norms. French authorities linked the operations to APT28, a well-known Russian hacking group connected to the GRU.

The group also allegedly orchestrated the 2015 cyberattack on TV5 Monde and attempted to manipulate voters during the 2017 French election by leaking thousands of campaign documents. A rise in attacks has been noted ahead of major events like the Olympics and future elections.

France’s national cybersecurity agency recorded a 15% increase in Russia-linked attacks in 2024, targeting ministries, defence firms, and cultural venues. French officials warn the hacks aim to destabilise society and erode public trust.

France plans closer cooperation with Poland and has pledged to counter Russia’s cyber operations with all available means.

Microsoft Recall raises privacy alarm again

Fresh concerns are mounting over privacy risks after Microsoft confirmed the return of its controversial Recall feature for Copilot+ PCs. Recall takes continuous screenshots of everything on a Windows user’s screen and stores them in a searchable database powered by AI.

Although screenshots are saved locally and protected by a PIN, experts warn the system undermines the security of encrypted apps like WhatsApp and Signal by storing anything shown on screen, even if it was meant to disappear.

Critics argue that even users who have not enabled Recall could have their private messages captured if someone they are chatting with has the feature switched on.

Cybersecurity experts have already demonstrated that guessing the PIN gives full access to all screen content—deleted or not—including sensitive conversations, images, and passwords.

With no automatic warning or opt-out for people being recorded, concerns are growing that secure communication is being eroded by stealth.

At the same time, Meta has revealed new AI tools for WhatsApp that can summarise chats and suggest replies. Although the company insists its ‘Private Processing’ feature will ensure security, experts are questioning why secure messaging platforms need AI integrations at all.

Even if WhatsApp’s AI remains private, Microsoft Recall could still quietly record and store messages, creating a privacy paradox that many users may not fully understand.

UK refuses to include Online Safety Act in US trade talks

The UK government has ruled out watering down the Online Safety Act as part of any trade negotiations with the US, despite pressure from American tech giants.

Speaking to MPs on the Science, Innovation and Technology Committee, Baroness Jones of Whitchurch, the parliamentary under-secretary for online safety, stated unequivocally that the legislation was ‘not up for negotiation’.

‘There have been clear instructions from the Prime Minister,’ she said. ‘The Online Safety Act is not part of the trade deal discussions. It’s a piece of legislation — it can’t just be negotiated away.’

Reports had suggested that President Donald Trump’s administration might seek to make loosening the UK’s online safety rules a condition of a post-Brexit trade agreement, following lobbying from large US-based technology firms.

However, Baroness Jones said the legislation was well into its implementation phase and that ministers were ‘happy to reassure everybody’ that the government is sticking to it.

The Online Safety Act will require tech platforms that host user-generated content, such as social media firms, to take active steps to protect users — especially children — from harmful and illegal content.

Non-compliant companies may face fines of up to £18 million or 10% of global turnover, whichever is greater. In extreme cases, platforms could be blocked from operating in the UK.
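The ‘whichever is greater’ cap means the £18 million floor only binds for smaller firms; for the largest platforms, the 10% turnover figure dominates. A minimal sketch of that calculation (the turnover figures below are assumptions for illustration):

```python
# Illustrative only: Online Safety Act fines are capped at the greater of
# GBP 18 million or 10% of qualifying worldwide turnover.
FLOOR_GBP = 18_000_000

def max_osa_fine(global_turnover_gbp: float) -> float:
    """Return the maximum possible fine for a given global turnover."""
    return max(FLOOR_GBP, 0.10 * global_turnover_gbp)

# A platform with GBP 1bn turnover faces up to GBP 100m;
# a small firm still faces up to the GBP 18m floor.
print(max_osa_fine(1_000_000_000))  # 100000000.0
print(max_osa_fine(50_000_000))     # 18000000
```
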

Mark Bunting, a representative of Ofcom, which is overseeing enforcement of the new rules, said the regulator would have taken action had the legislation been in force during last summer’s riots in Southport, which were exacerbated by online misinformation.

His comments contrasted with tech firms including Meta, TikTok and X, which claimed in earlier hearings that little would have changed under the new rules.

OpenAI’s CEO Altman confirms rollback of GPT-4o after criticism

OpenAI has reversed a recent update to its GPT-4o model after users complained it had become overly flattering and blindly agreeable. The behaviour, widely mocked online, saw ChatGPT praising dangerous or clearly misguided user ideas, leading to concerns over the model’s reliability and integrity.

The change had been part of a broader attempt to make GPT-4o’s default personality feel more ‘intuitive and effective’. However, OpenAI admitted the update relied too heavily on short-term user feedback and failed to consider how interactions evolve over time.

In a blog post published Tuesday, OpenAI said the model began producing responses that were ‘overly supportive but disingenuous’. The company acknowledged that sycophantic interactions could feel ‘uncomfortable, unsettling, and cause distress’.

Following CEO Sam Altman’s weekend announcement of an impending rollback, OpenAI confirmed that the previous, more balanced version of GPT-4o had been reinstated.

It also outlined steps to avoid similar problems in future, including refining model training, revising system prompts, and expanding safety guardrails to improve honesty and transparency.

Further changes in development include real-time feedback mechanisms and allowing users to choose between multiple ChatGPT personalities. OpenAI says it aims to incorporate more diverse cultural perspectives and give users greater control over the assistant’s behaviour.
