AI to take over all Meta ads under new plan

Meta is preparing to transform digital advertising on its platforms, with reports indicating that by 2026, all adverts on Facebook and Instagram could be fully created and targeted using AI.

The company’s vision would see AI tools take over the entire process—from ad generation to audience selection—requiring advertisers to provide only a product image and budget.

Since introducing generative AI features for advertisers in May 2023, Meta has continued to expand its automation capabilities. Currently, AI plays a major role in targeting ads across Meta’s platforms.

Under the new system, Meta’s AI will go several steps further by generating text, visuals, and video, as well as optimising ad delivery for the most suitable audience.

The initiative is aligned with CEO Mark Zuckerberg’s broader vision of AI-led automation, especially within advertising—Meta’s financial backbone, which accounted for over 97% of the company’s revenue last year.

Speaking at Meta’s annual shareholder meeting, Zuckerberg outlined a future where businesses simply define their marketing goal and budget, link a payment method, and allow Meta’s AI to handle the rest.

The company is also developing real-time personalisation tools. These will allow the same ad to appear differently depending on a user’s location or context—for example, showing a car in snowy terrain to one user, while another might see it in an urban setting.

Meta is also exploring integration with third-party AI models such as DALL·E and Midjourney to further enhance creative capabilities.

This move follows similar developments by rivals like Google, which recently launched its Veo video generation model. With AI continuing to reshape the advertising landscape, Meta is betting on full automation as the next frontier in digital marketing.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

WhatsApp fixes deleted message privacy gap

WhatsApp is rolling out a privacy improvement that ensures deleted messages no longer linger in quoted replies, addressing a long-standing issue that exposed partial content users had intended to remove.

The update applies automatically, with no toggle required, and has begun reaching iOS users through version 25.12.73, with wider availability expected soon.

Until now, deleting a message for everyone in a chat has not removed it from quoted replies. That allowed fragments of deleted content to remain visible, undermining the purpose of deletion.

With the update, WhatsApp removes the associated quoted message entirely rather than keeping it in conversation threads, even in group and community chats.

WABetaInfo, which first spotted the update, noted that users delete messages for privacy or personal reasons, and that leaving quoted traces behind conflicted with those intentions.

The change ensures conversations reflect user expectations by entirely erasing deleted content, not only from the original message but also from any references.

Meta continues to develop new features for WhatsApp. Recent additions include voice chat in groups and a native interface for iPad. The company is also testing tools like AI-generated wallpapers, message summaries, and more refined privacy settings to enhance user control and experience further.


NSO asks court to overturn WhatsApp verdict

Israeli spyware company NSO Group has requested a new trial after a US jury ordered it to pay $168 million in damages to WhatsApp.

The company, which has faced mounting legal and financial troubles, filed a motion in a California federal court last week seeking to reduce the verdict or secure a retrial.

The May verdict awarded WhatsApp $444,719 in compensatory damages and $167.25 million in punitive damages. Jurors found that NSO exploited vulnerabilities in the encrypted platform and sold the exploit to clients who allegedly used it to target journalists, activists and political rivals.

WhatsApp, owned by Meta, filed the lawsuit in 2019.

NSO claims the punitive award is unconstitutional, arguing it is over 376 times greater than the compensatory damages and far exceeds the US Supreme Court’s general guidance of a 4:1 ratio.
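The 376:1 figure follows directly from the two awards in the May verdict:

```python
# Punitive-to-compensatory ratio from the May verdict figures.
compensatory = 444_719         # USD, compensatory damages
punitive = 167_250_000         # USD, punitive damages
ratio = punitive / compensatory
print(round(ratio))            # roughly 376, versus the ~4:1 guidance NSO cites
```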

The firm also said it cannot afford the penalty, citing losses of $9 million in 2023 and $12 million in 2024. Its CEO testified that the company is ‘struggling to keep our heads above water’.

WhatsApp, responding to TechCrunch in a statement, said NSO was once again trying to evade accountability. The company vowed to continue its legal campaign, including efforts to secure a permanent injunction that would prevent NSO from ever targeting WhatsApp or its users again.


Meta faces backlash over open source AI claims

Meta is under renewed scrutiny for what critics describe as ‘open washing’ after sponsoring a Linux Foundation whitepaper on the benefits of open source AI.

The paper highlights how open models help reduce enterprise costs—claiming companies using proprietary AI tools spend over three times more. However, Meta’s involvement has raised questions, as its Llama AI models are presented as open source despite industry experts insisting otherwise.

Amanda Brock, head of OpenUK, argues that Llama does not meet accepted definitions of open source due to licensing terms that restrict commercial use.

She referenced the Open Source Initiative’s (OSI) standards, which Llama fails to meet, pointing to the presence of commercial limitations that contradict open source principles. Brock noted that open source should allow unrestricted use, which Llama’s license does not support.

Meta has long branded its Llama models as open source, but the OSI and other stakeholders have repeatedly pushed back, stating that the company’s licensing undermines the very foundation of open access.

While Brock acknowledged Meta’s contribution to the broader open source conversation, she also warned that such mislabelling could have serious consequences—especially as lawmakers and regulators increasingly reference open source in crafting AI legislation.

Other firms have faced similar allegations, including Databricks with its DBRX model in 2024, which was also criticised for failing to meet OSI standards. As the AI sector continues to evolve, the line between truly open and merely accessible models remains a point of growing tension.


German court allows Meta to use Facebook and Instagram data

A German court has ruled in favour of Meta, allowing the tech company to use data from Facebook and Instagram to train AI systems. The Cologne court found that Meta had not breached EU law and deemed its AI development a legitimate interest.

According to the court, Meta is permitted to process public user data without explicit consent. Judges argued that training AI systems could not be achieved by other equally effective and less intrusive methods.

They noted that Meta plans to use only publicly accessible data and had taken adequate steps to inform users via its mobile apps.

Despite the ruling, the North Rhine-Westphalia Consumer Advice Centre remains critical, raising concerns about legality and user privacy. Privacy group Noyb also challenged the decision, warning it could take further legal action, including a potential class-action lawsuit.


Apple’s smart glasses may launch in 2025 with voice and AI features

Apple is reportedly planning to launch its own smart glasses by the end of 2025, positioning the device as a more premium alternative to Meta’s Ray-Ban smart glasses.

According to Bloomberg, the wearable will include built-in cameras, microphones, and speakers, offering users capabilities like taking calls, playing music, navigating directions, and translating languages in real time.

The glasses are expected to rely on Siri for voice commands and real-world analysis. A source familiar with the project said Apple aims to outperform Meta’s product in both build quality and features, though the price is also expected to be significantly higher.

One key uncertainty is whether Apple’s updated Siri with generative AI capabilities will be ready in time for launch. Unlike Meta’s Llama and Google’s Gemini, which are already deployed at scale, Apple’s AI infrastructure is still under development.

Currently, Apple relies on third-party systems like Google Lens and OpenAI through iPhone features such as Visual Intelligence, but the company may seek to replace these with its own technology in the upcoming device.


Meta and PayPal users targeted in new phishing scam

Cybersecurity experts are warning of a rapid and highly advanced phishing campaign that targets Meta and PayPal users with instant account takeovers. The attack exploits Google’s AppSheet platform to send emails from a legitimate domain, bypassing standard security checks.

Victims are tricked into entering login details and two-factor authentication codes, which are then harvested in real time. Emails used in the campaign pose as urgent security alerts from Meta or PayPal, urging recipients to click a fake appeal link.

A double-prompt technique falsely claims that an initial login attempt failed, increasing the likelihood that victims submit accurate credentials. KnowBe4 reports that 98% of detected threats impersonated Meta, with the remainder targeting PayPal.

Google confirmed it has taken steps to reduce the campaign’s impact by improving AppSheet security and deploying advanced Gmail protections. The company advised users to stay alert and consult their guide to spotting scams. Meta and PayPal have not yet commented on the situation.


Meta’s AI benchmarking practices under scrutiny

Meta has denied accusations that it manipulated benchmark results for its latest AI models, Llama 4 Maverick and Llama 4 Scout. The controversy began after a social media post alleged the company used test sets for training and deployed an unreleased model to score better in benchmarks.

Ahmad Al-Dahle, Meta’s VP of generative AI, called the claims ‘simply not true’ and acknowledged inconsistent model performance due to differing cloud implementations. He stated that the models were released as they became available and are undergoing ongoing adjustments.

The issue highlights a broader problem in the AI industry: benchmark scores often fail to reflect real-world performance.

Other AI leaders, including Google and OpenAI, have faced similar scrutiny, as models with high benchmark results can struggle with reasoning tasks and show unpredictable behaviour outside controlled tests.

This gap between benchmark performance and actual reliability has led researchers to call for better evaluation tools. Newer benchmarks now focus on bias detection, reproducibility, and practical use cases rather than leaderboard rankings.

Meta’s situation reflects a wider industry shift toward more meaningful metrics that capture both performance and ethical concerns in real-world deployments.


Meta aims to boost Llama adoption among startups

Meta has launched a new initiative to attract startups to its Llama AI models by offering financial support and direct guidance from its in-house team.

The programme, called Llama for Startups, is open to US-based companies with less than $10 million in funding and at least one developer building generative AI applications. Eligible firms can apply by 30 May.

Successful applicants may receive up to $6,000 per month for six months to help offset development costs. Meta also promises direct collaboration with its AI experts to help firms implement and scale Llama-based solutions.

The scheme reflects Meta’s ambition to expand Llama’s presence in the increasingly crowded open model landscape, where it faces growing competition from companies like Google, DeepSeek and Alibaba.

Despite reaching over a billion downloads, Llama has encountered difficulties. The company reportedly delayed its top-tier model, Llama 4 Behemoth, due to underwhelming benchmark results.

Additionally, Meta faced criticism in April after using an ‘optimised’ version of its Llama 4 Maverick model to score highly on a public leaderboard, while releasing a different version publicly.

Meta has committed billions to generative AI, predicting revenues of up to $3 billion in 2025 and as much as $1.4 trillion by 2035.

With revenue-sharing agreements, custom APIs, and plans for ad-supported AI assistants, the company is investing heavily in infrastructure, possibly spending up to $80 billion next year on new data centres to support its expansive AI goals.


Taiwan targets Facebook scam ads with new penalties

Taiwan’s Ministry of Digital Affairs plans to impose penalties on Meta for failing to enforce real-name verification on Facebook ads, according to Minister Huang Yen-nan. The move follows a recent meeting with law enforcement and growing concerns over scam-related losses.

A report from CommonWealth Magazine found Taiwanese victims lose NT$400 million (US$13 million) daily to scams, with 70% of losses tied to Facebook. Facebook has been the top scam-linked platform for two years, with over 60% of users reporting exposure to fraudulent content.

From April 2023 to September 2024, nearly 59,000 scam ads were found across Facebook and Google. One Facebook group in Chiayi County, with 410,000 members, was removed after being overwhelmed with daily fake job ads.

Huang identified Meta as the more problematic platform, saying 60% to 70% of financial scams stem from Facebook ads. Police have referred 15 cases to the ministry since May, but only two resulted in fines due to incomplete advertiser information.

Legislator Hung Mung-kai criticised delays in enforcement, noting that new anti-fraud laws took effect in February but enforcement actions only began in May. Huang defended the process, stating that platforms typically comply with takedown requests and real-name rules.

Under current law, scam ads must be removed within 24 hours of being reported. The ministry has used AI to detect and remove approximately 100,000 scam ads recently. Officials are now planning face-to-face meetings with Meta to demand stronger ad oversight.

Deputy Interior Minister Ma Shi-yuan called on platforms like Facebook and Line to improve ad screening, emphasising that law enforcement alone cannot manage the volume of online content.
