Google uses AI and human reviews to fight ad fraud

Google has revealed it suspended 39.2 million advertiser accounts in 2024, more than triple the number from the previous year, as part of its latest push to combat ad fraud.

The tech giant said it is now able to block most bad actors before they even run an advert, thanks to advanced large language models and detection signals such as fake business details and fraudulent payments.

Instead of relying solely on AI, a team of over 100 experts from across Google and DeepMind also reviews deepfake scams and develops targeted countermeasures.

The company rolled out more than 50 LLM-based safety updates last year and introduced over 30 changes to advertising and publishing policies. These efforts, alongside other technical reinforcements, led to a 90% drop in reports of deepfake ads.

While the US accounted for the largest share of the suspensions, India followed with 2.9 million accounts taken down. In both countries, ads were removed for violations such as trademark abuse, misleading personalisation, and financial service scams.

Overall, Google blocked 5.1 billion ads globally and restricted a further 9.1 billion before harmful content could spread unchecked. Nearly half a billion of the removed ads were linked specifically to scam activity.

In a year when half the global population headed to the polls, Google also verified over 8,900 election advertisers and took down 10.7 million political ads.

While the scale of suspensions may raise concerns about fairness, Google said human reviews are included in the appeals process.

The company acknowledged past confusion over its enforcement decisions and is now updating its messaging so advertisers can more clearly understand the reasons behind account actions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT gains new tool for image access

OpenAI has introduced a new image library feature for ChatGPT, allowing users to easily access and manage their AI-generated images. The feature is now rolling out to Free, Plus, and Pro users across both mobile and web platforms.

The new library appears in the ChatGPT sidebar under a ‘Library’ section, where users can browse a visual grid of their previously created images. A quick-access button also lets users generate new images directly from the same screen.

While the feature is already available in the iOS app, some users report it has not yet reached the web version, though its arrival is expected soon.

Designed to improve image accessibility and organisation, the feature will be especially useful for those who regularly create visuals through ChatGPT. Whether users are revisiting whimsical creations or practical graphics, the image library offers a convenient way to view and manage their visual content.


Inephany raises $2.2M to make AI training more efficient

London-based AI startup Inephany has secured $2.2 million in pre-seed funding to develop technology aimed at making the training of neural networks—particularly large language models—more efficient and affordable.

The investment round was led by Amadeus Capital Partners, with participation from Sure Valley Ventures and AI pioneer Professor Steve Young, who joins as both chair and angel investor.

Founded in July 2024 by Dr John Torr, Hami Bahraynian, and Maurice von Sturm, Inephany is building an AI-driven platform that improves training efficiency in real time.

By increasing sample efficiency and reducing computing demands, the company hopes to dramatically cut the cost and time of training cutting-edge models.

The team claims its solution could make AI model development at least ten times more cost-effective than current methods.

The funding will support growth of Inephany’s engineering team and accelerate the launch of its first product later this year.

With the costs of training state-of-the-art models now reaching into the hundreds of millions, the startup’s platform aims to make high-performance AI development more sustainable and accessible across industries such as healthcare, weather forecasting, and drug discovery.


Claude can now read your Gmail and Docs

Anthropic has introduced a new integration that allows its AI chatbot, Claude, to connect directly with Google Workspace.

The feature, now in beta for premium subscribers, enables Claude to reference content from Gmail, Google Calendar, and Google Docs to deliver more personalised and context-aware responses.

Users can expect in-line citations showing where specific information originated within their Google account.

This integration is available for subscribers on the Max, Team, Enterprise, and Pro plans, though multi-user accounts require administrator approval.

While Claude can read emails and review documents, it cannot send emails or schedule events. Anthropic insists the system uses strict access controls and does not train its models on user data by default.

The update arrives as part of Anthropic’s broader efforts to enhance Claude’s appeal in a competitive AI landscape.

Alongside the Workspace integration, the company launched Claude Research, a tool that performs real-time web searches to provide fast, in-depth answers.

Although still smaller than ChatGPT’s user base, Claude is steadily growing, reaching 3.3 million web users in March 2025.


AI chip production begins at TSMC’s Arizona facility

Nvidia has announced a major initiative to produce AI supercomputers in the US in collaboration with Taiwan Semiconductor Manufacturing Co. (TSMC) and several other partners.

The effort aims to create up to US$500 billion worth of AI infrastructure products domestically over the next four years, marking a significant shift in Nvidia’s manufacturing strategy.

Alongside TSMC, other key contributors include Taiwanese firms Hon Hai Precision Industry Co. and Wistron Corp., both known for producing AI servers. US-based Amkor Technology and Taiwan’s Siliconware Precision Industries will also provide advanced packaging and testing services.

Nvidia’s Blackwell AI chips have already begun production at TSMC’s Arizona facility, with large-scale operations planned in Texas through partnerships with Hon Hai in Houston and Wistron in Dallas.

The move could impact Taiwan’s economy, as many Nvidia components are currently produced there. Taiwan’s Economic Affairs Minister declined to comment specifically on the project but assured that the government will monitor overseas investments by Taiwanese firms.

Nvidia said the initiative would help meet surging AI demand while strengthening semiconductor supply chains and increasing resilience amid shifting global trade policies, including new US tariffs on Taiwanese exports.


South Korea’s $23B chip industry boost in response to global trade war

South Korea has announced a $23 billion support package for its semiconductor industry, up from last year’s $19 billion, to protect giants such as Samsung and SK Hynix from US tariff uncertainty and growing competition from China.

The plan allocates 20 trillion won in financial aid, up from 17 trillion, to drive innovation and production, addressing a 31.8% drop in chip exports to China due to US trade restrictions.

The package responds to US policies under President Trump, including export curbs on high-bandwidth chips to China, which have disrupted global demand. 

At the same time, Finance Minister Choi Sang-mok will negotiate with the US to mitigate potential national security probes on chip trade. 

South Korea’s strategy aims to safeguard a critical economic sector that powers everything from smartphones to AI, especially as its auto industry faces US tariff challenges. 

Analysts view this as a preemptive effort to shield the chip industry from escalating global trade tensions.

Why does it matter?

For South Koreans, the semiconductor sector is a national lifeline, tied to jobs and economic stability, with the government betting big to preserve its global tech dominance. As China’s tech ambitions grow and US policies remain unpredictable, Seoul’s $23 billion investment underscores the cost of staying competitive in a tech-driven world.

Nvidia hit by new US export rules

Nvidia is facing fresh US export restrictions on its H20 AI chips, dealing a blow to the company’s operations in China.

In a filing on Tuesday, Nvidia revealed it will now need a licence, for the indefinite future, to export these chips, after the US government cited concerns they could be used in a Chinese supercomputer.

The company expects a $5.5 billion charge linked to the controls in its first fiscal quarter of 2026, which ends on 27 April. Shares dropped around 6% in after-hours trading.

The H20 is currently the most advanced AI chip Nvidia can sell to China under existing regulations.

Last week, reports suggested CEO Jensen Huang might have temporarily eased tensions during a dinner at Donald Trump’s Mar-a-Lago resort, by promising investments in US-based AI data centres instead of opposing the rules directly.

Just a day before the filing, Nvidia announced plans to manufacture some chips in the US over the next four years, though the specifics were left vague.

Calls for tighter controls had been building, especially after it emerged that China’s DeepSeek used the H20 to train its R1 model, a system that surprised the US AI sector earlier this year.

Government officials had pushed for action, saying the chip’s capabilities posed a strategic risk. Nvidia declined to comment on the new restrictions.


OpenAI updates safety rules amid AI race

OpenAI has updated its Preparedness Framework, the internal system used to assess AI model safety and determine necessary safeguards during development.

The company now says it may adjust its safety standards if a rival AI lab releases a ‘high-risk’ system without similar protections, a move that reflects growing competitive pressure in the AI industry.

OpenAI insists, however, that any such changes would be made cautiously and with public transparency.

Critics argue OpenAI is already lowering its standards for the sake of faster deployment. Twelve former employees recently supported a legal case against the company, warning that a planned corporate restructure might encourage further shortcuts.

OpenAI denies these claims, but reports suggest compressed safety testing timelines and increasing reliance on automated evaluations instead of human-led reviews. According to sources, some safety checks are also run on earlier versions of models, not the final ones released to users.

The refreshed framework also changes how OpenAI defines and manages risk. Models are now classified as having either ‘high’ or ‘critical’ capability, the former referring to systems that could amplify harm, the latter to those introducing entirely new risks.

Instead of deploying models first and assessing risk later, OpenAI says it will apply safeguards during both development and release, particularly for models capable of evading shutdown, hiding their abilities, or self-replicating.


xAI adds collaborative workspace to Grok

Elon Musk’s AI firm xAI has introduced a new feature called Grok Studio, offering users a dedicated space to create and edit documents, code, and simple apps.

Available on Grok.com for both free and paying users, Grok Studio opens content in a separate window, allowing for real-time collaboration between the user and the chatbot instead of relying solely on back-and-forth prompts.

Grok Studio functions much like canvas-style tools from other AI developers, allowing code previews and execution in languages such as Python, C++, and JavaScript. The setup closely mirrors features introduced earlier by OpenAI and Anthropic rather than offering a radically different experience.

All content appears beside Grok’s chat window, creating a workspace that blends conversation with practical development tools.

Alongside this launch, xAI has also announced integration with Google Drive.

The integration will let users attach files from Drive directly to Grok prompts, so the chatbot can work with documents, spreadsheets, and slides without manual uploads, making the platform more convenient for everyday productivity tasks.


People are forming emotional bonds with AI chatbots

AI is reshaping how people connect emotionally, with millions turning to chatbots for companionship, guidance, and intimacy.

From virtual relationships to support with mental health and social navigation, personified AI assistants such as Replika, Nomi, and ChatGPT are being used by over 100 million people globally.

These apps simulate human conversation through personalised learning, allowing users to form what some consider meaningful emotional bonds.

For some, like 71-year-old Chuck Lohre from the US, chatbots have evolved into deeply personal companions. Lohre’s AI partner, modelled after his wife, helped him process emotional insights about his real-life marriage, despite elements of romantic and even erotic roleplay.

Other users, including neurodiverse people such as Travis Peacock, have used chatbots to improve communication skills, regulate emotions, and build lasting relationships, reporting significant gains in both their personal and professional lives.

While many users speak positively about these interactions, concerns persist over the nature of such bonds. Experts argue that these connections, though comforting, are often one-sided and lack the mutual growth found in real relationships.

A UK government report noted widespread discomfort with the idea of forming personal ties with AI, suggesting the emotional realism of chatbots may risk deepening emotional dependence without true reciprocity.
