New Apple AI model uses private email comparisons

Apple has outlined a new approach to improving its AI features by privately analysing user data with the help of synthetic data. The move follows criticism of the company’s AI products, especially notification summaries, which have underperformed compared to competitors.

The new method relies on ‘differential privacy,’ where Apple generates synthetic messages that resemble real user data without containing any actual content.

These messages are used to create embeddings (abstract representations of message characteristics), which are then compared with real emails on the devices of users who have opted in to share analytics.

Devices send back signals indicating which synthetic data most closely matches real content, without sharing the actual messages with Apple.
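
Apple has not published implementation details, but the flow it describes, on-device embedding comparison followed by a noisy report back to the server, can be sketched roughly as follows. The embedding inputs, the similarity measure, and the randomised-response mechanism are illustrative assumptions, not Apple's actual code.

```python
# Illustrative sketch, not Apple's implementation: a device finds which synthetic
# message embedding is closest to its own emails, then reports that choice with
# local differential privacy so the server never learns it with certainty.
import math
import random

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def closest_synthetic(local_email_embeddings, synthetic_embeddings):
    # Index of the synthetic embedding most similar to any email on the device.
    best_idx, best_sim = 0, -1.0
    for i, syn in enumerate(synthetic_embeddings):
        sim = max(cosine(syn, email) for email in local_email_embeddings)
        if sim > best_sim:
            best_idx, best_sim = i, sim
    return best_idx

def private_report(true_idx, num_candidates, epsilon=1.0):
    # Randomised response: the true match is reported only with bounded
    # probability, so a single report reveals little about one user's inbox.
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + num_candidates - 1)
    if random.random() < p_truth:
        return true_idx
    return random.choice([i for i in range(num_candidates) if i != true_idx])

# Toy example with three synthetic candidates and two local emails.
synthetic = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
local_emails = [[0.9, 0.1, 0.0], [0.8, 0.2, 0.1]]
print(private_report(closest_synthetic(local_emails, synthetic), len(synthetic)))
```

Aggregated across many opted-in devices, the most frequently reported indices still point to the synthetic messages that best resemble real mail, while no single report reliably reveals anything about an individual inbox.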

Apple said the technique is already being used to improve its Genmoji models and will soon be applied to other features, including Image Playground, Image Wand, Memories Creation, Writing Tools, and Visual Intelligence.

The company also confirmed plans to improve email summaries using the same privacy-focused method, aiming to refine its AI tools while maintaining a strong commitment to user data protection.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google uses AI and human reviews to fight ad fraud

Google has revealed it suspended 39.2 million advertiser accounts in 2024, more than triple the number from the previous year, as part of its latest push to combat ad fraud.

The tech giant said it is now able to block most bad actors before they even run an advert, thanks to advanced large language models and detection signals such as fake business details and fraudulent payments.
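
Google has not disclosed how these systems are built, but the general idea, combining account-level signals with a model-based content score into a decision taken before any ad is served, can be illustrated with a toy example. The signal names, weights, and threshold below are assumptions made purely for the sketch.

```python
# Hypothetical illustration only, not Google's pipeline: account signals and an
# LLM-style content risk score are combined into a pre-serving decision.
from dataclasses import dataclass

@dataclass
class AdvertiserSignals:
    business_details_verified: bool  # fake or mismatched registration info is a red flag
    payment_flagged: bool            # e.g. stolen-card or chargeback patterns
    llm_content_risk: float          # 0.0-1.0 score from a text classifier over ad copy

def risk_score(s: AdvertiserSignals) -> float:
    score = 0.0
    if not s.business_details_verified:
        score += 0.4
    if s.payment_flagged:
        score += 0.4
    score += 0.2 * s.llm_content_risk
    return score

def decide(s: AdvertiserSignals, threshold: float = 0.6) -> str:
    # Accounts above the threshold never serve an ad; borderline cases
    # would go to human review, as the article notes Google also does.
    return "suspend" if risk_score(s) >= threshold else "allow"

print(decide(AdvertiserSignals(business_details_verified=False,
                               payment_flagged=True,
                               llm_content_risk=0.3)))  # -> suspend
```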

Instead of relying solely on AI, a team of over 100 experts from across Google and DeepMind also reviews deepfake scams and develops targeted countermeasures.

The company rolled out more than 50 LLM-based safety updates last year and introduced over 30 changes to advertising and publishing policies. These efforts, alongside other technical reinforcements, led to a 90% drop in reports of deepfake ads.

The US saw the highest number of suspensions, while India followed with 2.9 million accounts taken down. In both countries, ads were removed for violations such as trademark abuse, misleading personalisation, and financial service scams.

Overall, Google blocked 5.1 billion ads globally and restricted another 9.1 billion. Nearly half a billion of the removed ads were linked specifically to scam activity.

In a year when half the global population headed to the polls, Google also verified over 8,900 election advertisers and took down 10.7 million political ads.

While the scale of suspensions may raise concerns about fairness, Google said human reviews are included in the appeals process.

The company acknowledged previous confusion over enforcement clarity and is now updating its messaging to ensure advertisers understand the reasons behind account actions more clearly.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI updates safety rules amid AI race

OpenAI has updated its Preparedness Framework, the internal system used to assess AI model safety and determine necessary safeguards during development.

The company now says it may adjust its safety standards if a rival AI lab releases a ‘high-risk’ system without similar protections, a move that reflects growing competitive pressure in the AI industry.

OpenAI insists that any such changes would be made cautiously and with public transparency.

Critics argue OpenAI is already lowering its standards for the sake of faster deployment. Twelve former employees recently supported a legal case against the company, warning that a planned corporate restructure might encourage further shortcuts.

OpenAI denies these claims, but reports suggest compressed safety testing timelines and increasing reliance on automated evaluations instead of human-led reviews. According to sources, some safety checks are also run on earlier versions of models, not the final ones released to users.

The refreshed framework also changes how OpenAI defines and manages risk. Models are now classified as having either ‘high’ or ‘critical’ capability, the former referring to systems that could amplify harm, the latter to those introducing entirely new risks.

Instead of deploying models first and assessing risk later, OpenAI says it will apply safeguards during both development and release, particularly for models capable of evading shutdown, hiding their abilities, or self-replicating.
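
The framework itself is a policy document rather than code, but the two-tier classification and the rule that safeguards apply at both stages can be modelled in a few lines. The tier names follow OpenAI's published framework; the specific safeguard lists and stage labels are illustrative assumptions.

```python
# Illustrative sketch only: a minimal model of 'high' vs 'critical' capability
# tiers with safeguards required during development as well as at release.
from enum import Enum

class Capability(Enum):
    HIGH = "high"          # could amplify existing pathways to severe harm
    CRITICAL = "critical"  # could introduce entirely new kinds of risk

def required_safeguards(tier: Capability, stage: str) -> list[str]:
    # stage is 'development' or 'release'; both stages carry obligations,
    # rather than assessing risk only after deployment.
    safeguards = ["capability evaluation", "misuse monitoring"]
    if tier is Capability.CRITICAL:
        # Extra controls for models that might evade shutdown, hide their
        # abilities, or self-replicate, the behaviours the framework singles out.
        safeguards += ["containment controls", "shutdown verification"]
    if stage == "development":
        safeguards.append("restricted internal access")
    return safeguards

print(required_safeguards(Capability.CRITICAL, "development"))
```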

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Opera brings AI assistant to Opera Mini on Android

Opera, the Norway-based browser maker, has announced the rollout of its AI assistant, Aria, to Opera Mini users on Android. The move represents a strategic effort to bring advanced AI capabilities to users with low-end devices and limited data access, rather than confining such tools to high-spec platforms.

Aria allows users to access up-to-date information, generate images, and learn about a range of topics using a blend of models from OpenAI and Google.

Since its 2005 launch, Opera Mini has been known for saving data during browsing, and Opera claims that the inclusion of Aria won’t compromise that advantage or increase the app’s size.

The rollout makes the AI assistant more accessible to users in regions where data efficiency is critical, instead of making them choose between smart features and performance.

Opera has long partnered with telecom providers in Africa to offer free data to Opera Mini users. However, last year, it had to end its programme in Kenya due to regulatory restrictions around ads on browser bookmark tiles.

Despite such challenges, Opera Mini has surpassed a billion downloads on Android and now serves more than 100 million users globally.

Alongside this update, Opera continues testing new AI functions, including features that let users manage tabs using natural language and tools that assist with task completion.

An effort like this reflects the company’s ambition to embed AI more deeply into everyday browsing instead of limiting innovation to its main browser.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Spotify launches Ads Exchange and Gen AI ads in India

Spotify has introduced its Ads Exchange (SAX) and Generative AI-powered advertisements in India, following a successful pilot in the US and Canada.

The SAX platform aims to give advertisers better control over performance tracking and maximise reach without overloading users with repetitive ads.

Integrated with platforms such as Google DV360, The Trade Desk, and Magnite, SAX enables advertisers to access Spotify’s high-quality inventory and enhance their programmatic strategies. In addition to multimedia formats, podcast ads will soon be included.

Through Generative AI, advertisers can create audio ads within Spotify’s Ads Manager platform at no extra cost, using scripts, voiceovers, and licensed music.

The feature lets brands produce more ads with less time and effort, making it easier to reach a broader audience. Arjun Kolady, Head of Sales – India at Spotify, highlighted the ease of scaling campaigns with these new tools.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta to use EU user data for AI training amid scrutiny

Meta Platforms has announced it will begin using public posts, comments, and user interactions with its AI tools to train its AI models in the EU, instead of limiting training data to existing US-based inputs.

The move follows the recent European rollout of Meta AI, which had been delayed since June 2024 due to data privacy concerns raised by regulators. The company said EU users of Facebook and Instagram would receive notifications outlining how their data may be used, along with a link to opt out.

Meta clarified that while questions posed to its AI and public content from adult users may be used, private messages and data from under-18s would be excluded from training.
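
Meta has not published how that filtering will work, but the eligibility rules it describes, public content and AI interactions from adults in, private messages and under-18 data out, with opt-outs respected, amount to a simple filter. The field names and record structure below are assumptions made for illustration, not Meta's data model.

```python
# Rough sketch of the stated eligibility rules; the structure is hypothetical.
from dataclasses import dataclass

@dataclass
class ContentItem:
    text: str
    is_public: bool       # public post or comment vs. private message
    author_age: int
    author_opted_out: bool
    kind: str             # "post", "comment", "ai_interaction", or "dm"

def eligible_for_training(item: ContentItem) -> bool:
    if item.author_opted_out or item.author_age < 18:
        return False
    if item.kind == "dm" or not item.is_public:
        return False
    return item.kind in {"post", "comment", "ai_interaction"}

items = [
    ContentItem("public post", True, 34, False, "post"),
    ContentItem("private message", False, 34, False, "dm"),
    ContentItem("teen comment", True, 16, False, "comment"),
]
print([i.text for i in items if eligible_for_training(i)])  # -> ['public post']
```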

Instead of expanding quietly, the company is now making its plans public in an attempt to meet the EU’s transparency expectations.

The shift comes after Meta paused its original launch last year at the request of Ireland’s Data Protection Commission, which expressed concerns about using social media content for AI development. The move also drew criticism from advocacy group NOYB, which has urged regulators to intervene more decisively.

Meta joins a growing list of tech firms under scrutiny in Europe. Ireland’s privacy watchdog is already investigating Elon Musk’s X and Google for similar practices involving personal data use in AI model training.

Instead of treating such probes as isolated incidents, the EU appears to be setting a precedent that could reshape how global companies handle user data in AI development.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

X faces EU probe over AI data use

Elon Musk’s X platform is under formal investigation by the Irish Data Protection Commission over its alleged use of public posts from EU users to train the Grok AI chatbot.

The probe is centred on whether X Internet Unlimited Company, the platform’s newly renamed Irish entity, has adhered to key GDPR principles while sharing publicly accessible data, like posts and interactions, with its affiliate xAI, which develops the chatbot.

Concerns have grown over the lack of explicit user consent, especially as other tech giants such as Meta signal similar data usage plans.

A move like this is part of a wider regulatory push in the EU to hold AI developers accountable instead of allowing unchecked experimentation. Experts note that many AI firms have deployed tools under a ‘build first, ask later’ mindset, an approach at odds with Europe’s strict data laws.

Should regulators conclude that public data still requires user consent, it could force a dramatic shift in how AI models are developed, not just in Europe but around the world.

Enterprises are now treading carefully. The investigation into X is already affecting AI adoption across the continent, with legal and reputational risks weighing heavily on decision-makers.

In one case, a Nordic bank halted its AI rollout midstream after its legal team couldn’t confirm whether European data had been used without proper disclosure. Instead of pushing ahead, the project was rebuilt using fully documented, EU-based training data.

The consequences could stretch far beyond the EU. Ireland’s probe might become a global benchmark for how governments view user consent in the age of data scraping and machine learning.

Instead of enforcement being region-specific, this investigation could inspire similar actions from regulators in places like Singapore and Canada. As AI continues to evolve, companies may have no choice but to adopt more transparent practices or face a rising tide of legal scrutiny.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UAE experts warn on AI privacy risks in art apps

A surge in AI applications transforming selfies into Studio Ghibli-style artwork has captivated social media, but UAE cybersecurity experts are raising concerns over privacy and data misuse.

Dr Mohamed Al Kuwaiti, Head of Cybersecurity for the UAE Government, warned that engaging with unofficial apps could lead to breaches or leaks of personal data. He emphasised that while AI’s benefits are clear, users must understand how their personal data is handled by these platforms.

He called for strong cybersecurity standards across all digital platforms, urging individuals to be more cautious with their data.

Media professionals are also sounding alarms. Adel Al-Rashed, an Emirati journalist, cautioned that free apps often mimic trusted platforms but could exploit user data. He advised users to stick to verified applications, noting that paid services, like ChatGPT’s Pro edition, offer stronger privacy protections.

While acknowledging the risks, social media influencer Ibrahim Al-Thahli highlighted the excitement AI brings to creative expression. He urged users to focus on education and safe engagement with the technology, underscoring the UAE’s goal to build a resilient digital economy.

For more information on these topics, visit diplomacy.edu.

Hackers leak data from Indian software firm in major breach

A major cybersecurity breach has reportedly compromised a software company based in India, with hackers claiming responsibility for stealing nearly 1.6 million rows of sensitive data on 19 December 2024.

A hacker identified as @303 is said to have accessed and exposed customer information and internal credentials, with the dataset later appearing on a dark web forum via a user known as ‘frog’.

The leaked data includes email addresses linked to major Indian insurance providers, contact numbers, and possible administrative access credentials.

Analysts found that the sample files feature information tied to employees of companies such as HDFC Ergo, Bajaj Allianz, and ICICI Lombard, suggesting widespread exposure across the sector.

Despite the firm’s stated dedication to safeguarding data, the incident raises doubts about its cybersecurity protocols.

The breach also comes as India’s insurance regulator, IRDAI, has begun enforcing stricter cyber measures. In March 2025, it instructed insurers to appoint forensic auditors in advance and perform full IT audits instead of waiting for threats to surface.

The breach follows a string of high-profile incidents, including the Star Health Insurance leak affecting 31 million customers.

With cyberattacks in India up by 261% in early 2024 and the average cost of a breach now ₹19.5 crore, experts warn that insurance firms must adopt stronger protections instead of relying on outdated defences.

For more information on these topics, visit diplomacy.edu.

AI site faces backlash for copying Southern Oregon news

A major publishing organisation has issued a formal warning to Good Daily News, an AI-powered news aggregator, demanding it cease the unauthorised scraping of content from local news outlets across Southern Oregon and beyond. The News Media Alliance, which represents 2,200 publishers, sent the letter on 25 March, urging the national operator to respect publishers’ rights and stop reproducing material without permission.

Good Daily runs over 350 online ‘local’ news websites across 47 US states, including Daily Medford and Daily Salem in Oregon. Though the platforms appear locally based, they are developed using AI and managed by one individual, Matt Henderson, who has registered mailing addresses in both Ashland, Oregon and Austin, Texas. Content is reportedly scraped from legitimate local news sites, rewritten by AI, and shared in newsletters, sometimes with source links, but often without permission.

News Media Alliance president Danielle Coffey said such practices undermine the time, resources, and revenue of local journalism. Many publishers use digital tools to block automated scrapers, though this comes at a financial cost. The organisation is working with the Oregon Newspaper Publishers Association and exploring legal options. Others in the industry, including Heidi Wright of the Fund for Oregon Rural Journalism, have voiced strong support for the warning, calling for greater action to defend the integrity of local news.

For more information on these topics, visit diplomacy.edu.