Google uses AI and human reviews to fight ad fraud

Google has revealed it suspended 39.2 million advertiser accounts in 2024, more than triple the number from the previous year, as part of its latest push to combat ad fraud.

The tech giant said it is now able to block most bad actors before they even run an advert, thanks to advanced large language models and detection signals such as fake business details and fraudulent payments.

Instead of relying solely on AI, a team of over 100 experts from across Google and DeepMind also reviews deepfake scams and develops targeted countermeasures.

The company rolled out more than 50 LLM-based safety updates last year and introduced over 30 changes to advertising and publishing policies. These efforts, alongside other technical reinforcements, led to a 90% drop in reports of deepfake ads.

The US saw the highest number of suspensions, with 39.2 million accounts taken down there alone, while India followed with 2.9 million. In both countries, ads were removed for violations such as trademark abuse, misleading personalisation, and financial service scams.

Overall, Google blocked 5.1 billion ads globally and restricted a further 9.1 billion, preventing harmful content from spreading unchecked. Nearly half a billion of the removed ads were linked specifically to scam activity.

In a year when half the global population headed to the polls, Google also verified over 8,900 election advertisers and took down 10.7 million political ads.

While the scale of suspensions may raise concerns about fairness, Google said human reviews are included in the appeals process.

The company acknowledged past confusion over its enforcement decisions and is updating its messaging so advertisers can more clearly understand the reasons behind account actions.

Hertz customer data stolen in vendor cyberattack

Hertz has disclosed a significant data breach involving sensitive customer information, including credit card and driver’s licence details, following a cyberattack on one of its service providers.

The breach stemmed from vulnerabilities in the Cleo Communications file transfer platform, exploited in October and December 2024.

Hertz confirmed the unauthorised access on 10 February, with further investigations revealing a range of exposed data, including names, birth dates, contact details, and in some cases, Social Security and passport numbers.

While the company has not confirmed how many individuals were affected, notifications have been issued in the US, UK, Canada, Australia, and across the EU.

Hertz stressed that no misuse of customer data has been identified so far, and that the breach has been reported to law enforcement and regulators. Cleo has since patched the exploited vulnerabilities.

The identity of the attackers remains unknown. However, Cleo was previously targeted in a broader cyber campaign last October, with the Clop ransomware group later claiming responsibility.

The gang published Cleo’s company data online and listed dozens of breached organisations, suggesting the incident was part of a wider, coordinated effort.

People are forming emotional bonds with AI chatbots

AI is reshaping how people connect emotionally, with millions turning to chatbots for companionship, guidance, and intimacy.

From virtual relationships to support with mental health and social navigation, personified AI assistants such as Replika, Nomi, and ChatGPT are being used by over 100 million people globally.

These apps simulate human conversation through personalised learning, allowing users to form what some consider meaningful emotional bonds.

For some, like 71-year-old Chuck Lohre from the US, chatbots have evolved into deeply personal companions. Lohre’s AI partner, modelled after his wife, helped him reach emotional insights about his real-life marriage, even as their exchanges included elements of romantic and erotic roleplay.

Others, including neurodiverse users such as Travis Peacock, have used chatbots to improve communication skills, regulate emotions, and build lasting relationships, reporting significant gains in their personal and professional lives.

While many users speak positively about these interactions, concerns persist over the nature of such bonds. Experts argue that these connections, though comforting, are often one-sided and lack the mutual growth found in real relationships.

A UK government report noted widespread discomfort with the idea of forming personal ties with AI, suggesting the emotional realism of chatbots may risk deepening emotional dependence without true reciprocity.

Justice Department pushes to curb Google monopoly

Google has pushed back against a US government proposal to break up its business, arguing that such a move would hurt consumers and reduce competition rather than enhance it.

In a court filing ahead of a remedy trial due to begin on 21 April, Google claimed the Justice Department’s plan to divest products such as Chrome and Android would force users to adopt less effective alternatives.

The company stressed that consumers overwhelmingly prefer Google’s search engine and that its agreements with browser and device manufacturers do not prevent rivals from competing.

The Justice Department is asking the court to consider structural remedies, including breaking up parts of Google’s business or limiting its default search agreements, to curb what it deems monopolistic behaviour.

The agency originally proposed more aggressive action, such as divesting Google’s AI investments, but later backed down, citing concerns over unintended consequences in the fast-evolving AI sector.

Google has offered alternative remedies, including more flexibility for Android manufacturers to preload or set other search engines as default, without fully removing its own search partnerships.

A 15-day hearing will begin later this month, with both sides set to present evidence and call high-profile witnesses. Google’s CEO Sundar Pichai and Apple’s senior VP of services are among the 20 witnesses listed by the tech giant.

The Justice Department plans to call 19 witnesses, including executives from OpenAI, DuckDuckGo and Microsoft, as it argues for stronger measures to level the playing field in internet search.

Trump-backed WLFI boosts crypto portfolio with SEI token acquisition

World Liberty Financial (WLFI), a cryptocurrency project backed by the Trump family, has added 4.89 million SEI tokens to its portfolio. The purchase, valued at approximately $775,000, was made using USDC transferred from the project’s main wallet.

The move adds to WLFI’s growing collection of cryptocurrencies, which includes Bitcoin (BTC), Ether (ETH), and Tron (TRX). WLFI’s portfolio now spans 11 different tokens, amounting to over $346 million in investments.

Despite this large accumulation, the project has yet to realise any profits, with its portfolio currently down by $145.8 million. Its Ethereum holdings have suffered a particular blow, with losses exceeding $114 million.

The SEI acquisition comes amid growing speculation surrounding the Trump family’s involvement in the crypto market. WLFI’s proposal for a USD1 stablecoin has raised concerns among lawmakers about its potential to replace the US dollar in federal transactions.

Zhipu AI launches free agent to rival DeepSeek

Chinese AI startup Zhipu AI has introduced a free AI agent, AutoGLM Rumination, aimed at assisting users with tasks such as web browsing, travel planning, and drafting research reports.

The product was unveiled by CEO Zhang Peng at an event in Beijing, where he highlighted the agent’s use of the company’s proprietary models: GLM-Z1-Air for reasoning and GLM-4-Air-0414 as the foundation.

According to Zhipu, the new GLM-Z1-Air model outperforms DeepSeek’s R1 in both speed and resource efficiency. The launch reflects growing momentum in China’s AI sector, where companies are increasingly focusing on cost-effective solutions to meet rising demand.

AutoGLM Rumination stands out in a competitive landscape by being freely accessible through Zhipu’s official website and mobile app, unlike rival offerings such as Manus’ subscription-only AI agent. The company positions this move as part of a broader strategy to expand access and adoption.

Founded in 2019 as a spinoff from Tsinghua University, Zhipu has developed the GLM model series and claims its GLM-4 has surpassed OpenAI’s GPT-4 on several evaluation benchmarks.

In March, Zhipu secured major government-backed investment, including a 300 million yuan (US$41.5 million) contribution from Chengdu.

Meta to use EU user data for AI training amid scrutiny

Meta Platforms has announced it will begin using public posts, comments, and user interactions with its AI tools to train its AI models in the EU, instead of limiting training data to existing US-based inputs.

The move follows the recent European rollout of Meta AI, which had been delayed since June 2024 due to data privacy concerns raised by regulators. The company said EU users of Facebook and Instagram would receive notifications outlining how their data may be used, along with a link to opt out.

Meta clarified that while questions posed to its AI and public content from adult users may be used, private messages and data from under-18s would be excluded from training.

The company is now making its plans public in an attempt to meet the EU’s transparency expectations.

The shift comes after Meta paused its original launch last year at the request of Ireland’s Data Protection Commission, which expressed concerns about using social media content for AI development. The move also drew criticism from advocacy group NOYB, which has urged regulators to intervene more decisively.

Meta joins a growing list of tech firms under scrutiny in Europe. Ireland’s privacy watchdog is already investigating Elon Musk’s X and Google for similar practices involving personal data use in AI model training.

Instead of treating such probes as isolated incidents, the EU appears to be setting a precedent that could reshape how global companies handle user data in AI development.

Meta under fire for scrapping diversity and moderation policies

The NAACP Legal Defense Fund (LDF) has withdrawn from Meta’s civil rights advisory group, citing deep concerns over the company’s rollback of diversity, equity and inclusion (DEI) policies and changes to content moderation.

The decision follows Meta’s January announcement that it would end DEI programmes, eliminate its fact-checking teams, and revise moderation rules across its platforms.

Civil rights organisations, including LDF, expressed alarm at the time, warning that the changes could silence marginalised voices and increase the risk of online harm.

In a letter to Meta CEO Mark Zuckerberg, they criticised the company for failing to consult the advisory group or consider the impact on protected communities. LDF’s Todd A Cox later said the policy shift posed a ‘grave risk’ to Black communities and public discourse.

LDF also noted that the company had seen progress under previous DEI policies, including a significant increase in Black and Hispanic employees.

Its reversal, the group argues, may breach federal civil rights laws and expose Meta to legal consequences.

LDF urged Meta to assess the effects of its policy changes and increase transparency about how harmful content is reported and removed. Meta has not commented publicly on the matter.

Microsoft users at risk from tax-themed cyberattack

As the US tax filing deadline of 15 April approaches, cybercriminals are ramping up phishing attacks designed to exploit the urgency many feel during this stressful period.

Windows users are particularly at risk, as attackers are targeting Microsoft account credentials by distributing emails disguised as tax-related reminders.

These emails include a PDF attachment titled ‘urgent reminder’, which contains a malicious QR code. Once scanned, the code leads users through fake bot-protection and CAPTCHA checks before prompting them to enter their Microsoft login details, which are then sent to a server controlled by criminals.

Security researchers, including Peter Arntz from Malwarebytes, warn that the email addresses in these fake login pages are already pre-filled, making it easier for unsuspecting victims to fall into the trap.

Entering your password at this stage could hand your credentials to malicious actors, possibly operating from Russia, who may exploit your account for maximum profit.

This form of attack takes advantage of both the ticking tax clock and the stress many feel trying to meet the deadline, encouraging impulsive and risky clicks.

Importantly, this threat is not limited to Windows users or those filing taxes by the 15 April deadline. As phishing techniques become more advanced through the use of AI and automated smartphone farms, similar scams are expected to persist well beyond tax season.

The IRS rarely contacts individuals via email, and never to request sensitive information through links or attachments, so any such message should be treated with suspicion.

To stay safe, users are urged to remain vigilant, avoid clicking links or scanning QR codes in unsolicited emails, and go directly to official websites for tax updates or returns rather than relying on email.

The IRS offers resources to help recognise and report scams, and reviewing this guidance could be an essential step in protecting your personal information, not just today, but in the months ahead.
