German court allows Meta to use Facebook and Instagram data

A German court has ruled in favour of Meta, allowing the tech company to use data from Facebook and Instagram to train AI systems. The Cologne court found that Meta had not breached EU law and deemed its AI development a legitimate interest.

According to the court, Meta is permitted to process public user data without explicit consent. The judges argued that the goal of training AI systems could not be achieved by other equally effective but less intrusive means.

They noted that Meta plans to use only publicly accessible data and had taken adequate steps to inform users via its mobile apps.

Despite the ruling, the North Rhine-Westphalia Consumer Advice Centre remains critical, raising concerns about legality and user privacy. Privacy group Noyb has also criticised the decision, warning that it could take further legal action, including a potential class-action lawsuit.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Apple’s smart glasses may launch in 2025 with voice and AI features

Apple is reportedly planning to launch its own smart glasses by the end of 2025, positioning the device as a more premium alternative to Meta’s Ray-Ban smart glasses.

According to Bloomberg, the wearable will include built-in cameras, microphones, and speakers, offering users capabilities like taking calls, playing music, navigating directions, and translating languages in real time.

The glasses are expected to rely on Siri for voice commands and real-world analysis. A source familiar with the project said Apple aims to outperform Meta’s product in both build quality and features, though the price is also expected to be significantly higher.

One key uncertainty is whether Apple’s updated Siri with generative AI capabilities will be ready in time for launch. Unlike Meta, with its Llama models, and Google, with Gemini, Apple is still developing its own AI infrastructure.

Currently, Apple relies on third-party systems like Google Lens and OpenAI through iPhone features such as Visual Intelligence, but the company may seek to replace these with its own technology in the upcoming device.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta and PayPal users targeted in new phishing scam

Cybersecurity experts are warning of a rapid and highly advanced phishing campaign that targets Meta and PayPal users with instant account takeovers. The attack exploits Google’s AppSheet platform to send emails from a legitimate domain, bypassing standard security checks.

Victims are tricked into entering login details and two-factor authentication codes, which are then harvested in real time. Emails used in the campaign pose as urgent security alerts from Meta or PayPal, urging recipients to click a fake appeal link.

A double-prompt technique falsely claims an initial login attempt failed, increasing the likelihood that victims submit accurate information. KnowBe4 reports that 98% of detected threats impersonated Meta, with the remainder targeting PayPal.

Google confirmed it has taken steps to reduce the campaign’s impact by improving AppSheet security and deploying advanced Gmail protections. The company advised users to stay alert and consult their guide to spotting scams. Meta and PayPal have not yet commented on the situation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta’s AI benchmarking practices under scrutiny

Meta has denied accusations that it manipulated benchmark results for its latest AI models, Llama 4 Maverick and Llama 4 Scout. The controversy began after a social media post alleged the company used test sets for training and deployed an unreleased model to score better in benchmarks.

Ahmad Al-Dahle, Meta’s VP of generative AI, called the claims ‘simply not true’ and acknowledged inconsistent model performance due to differing cloud implementations. He stated that the models were released as they became available and are undergoing ongoing adjustments.

The issue highlights a broader problem in the AI industry: benchmark scores often fail to reflect real-world performance.

Other AI leaders, including Google and OpenAI, have faced similar scrutiny, as models with high benchmark results struggle with reasoning tasks and show unpredictable behaviour outside controlled tests.

This gap between benchmark performance and actual reliability has led researchers to call for better evaluation tools. Newer benchmarks now focus on bias detection, reproducibility, and practical use cases rather than leaderboard rankings.

Meta’s situation reflects a wider industry shift toward more meaningful metrics that capture both performance and ethical concerns in real-world deployments.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta aims to boost Llama adoption among startups

Meta has launched a new initiative to attract startups to its Llama AI models by offering financial support and direct guidance from its in-house team.

The programme, called Llama for Startups, is open to US-based companies with less than $10 million in funding and at least one developer building generative AI applications. Eligible firms can apply by 30 May.

Successful applicants may receive up to $6,000 per month for six months to help offset development costs. Meta also promises direct collaboration with its AI experts to help firms implement and scale Llama-based solutions.

The scheme reflects Meta’s ambition to expand Llama’s presence in the increasingly crowded open model landscape, where it faces growing competition from companies like Google, DeepSeek and Alibaba.

Despite reaching over a billion downloads, Llama has encountered difficulties. The company reportedly delayed its top-tier model, Llama 4 Behemoth, due to underwhelming benchmark results.

Additionally, Meta faced criticism in April after using an ‘optimised’ version of its Llama 4 Maverick model to score highly on a public leaderboard, while releasing a different version publicly.

Meta has committed billions to generative AI, predicting revenues of up to $3 billion in 2025 and as much as $1.4 trillion by 2035.

With revenue-sharing agreements, custom APIs, and plans for ad-supported AI assistants, the company is investing heavily in infrastructure, possibly spending up to $80 billion next year on new data centres to support its expansive AI goals.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Taiwan targets Facebook scam ads with new penalties

Taiwan’s Ministry of Digital Affairs plans to impose penalties on Meta for failing to enforce real-name verification on Facebook ads, according to Minister Huang Yen-nan. The move follows a recent meeting with law enforcement and growing concerns over scam-related losses.

A report from CommonWealth Magazine found Taiwanese victims lose NT$400 million (US$13 million) daily to scams, with 70% of losses tied to Facebook. Facebook has been the top scam-linked platform for two years, with over 60% of users reporting exposure to fraudulent content.

From April 2023 to September 2024, nearly 59,000 scam ads were found across Facebook and Google. One Facebook group in Chiayi County, with 410,000 members, was removed after being overwhelmed with daily fake job ads.

Huang identified Meta as the more problematic platform, saying 60% to 70% of financial scams stem from Facebook ads. Police have referred 15 cases to the ministry since May, but only two resulted in fines due to incomplete advertiser information.

Legislator Hung Mung-kai criticised delays in enforcement, noting that new anti-fraud laws took effect in February, but actions only began in May. Huang defended the process, stating that platforms typically comply with takedown requests and real-name rules.

Under current law, scam ads must be removed within 24 hours of being reported. The ministry has recently used AI to detect and remove approximately 100,000 scam ads. Officials are now planning face-to-face meetings with Meta to demand stronger ad oversight.

Deputy Interior Minister Ma Shi-yuan called on platforms like Facebook and Line to improve ad screening, emphasising that law enforcement alone cannot manage the volume of online content.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta’s Behemoth AI model faces setback

Meta Platforms has postponed the release of its flagship AI model, known as ‘Behemoth,’ due to internal concerns about its performance, according to a report by the Wall Street Journal.

Instead of launching as planned, the model remains in development, with engineers struggling to deliver improvements that would meaningfully advance it beyond earlier versions.

Behemoth was originally scheduled for release in April to coincide with Meta’s first AI developer conference but was quietly delayed to June. The latest update suggests the launch has now been pushed to autumn or later, as internal doubts grow over whether it is ready for public deployment.

In April, Meta previewed Behemoth under the Llama 4 line, calling it ‘one of the smartest LLMs in the world’ and positioning it as a teaching model for future AI systems. Instead of Behemoth, Meta released Llama 4 Scout and Llama 4 Maverick as the latest iterations in its AI portfolio.

The delay comes amid intense competition in the generative AI space, where rivals like Google, OpenAI, and Anthropic continue advancing their models. Meta appears to be opting for caution instead of rushing an underwhelming product to market.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta targets critics as FTC case unfolds

Long-standing friction between Big Tech and the media resurfaced during Meta’s antitrust trial with the Federal Trade Commission this week. In a heated courtroom exchange, Meta’s legal team used critical commentary from prominent tech journalists to cast doubt on the FTC’s case.

Meta’s lead attorney, Mark Hansen, questioned the credibility of FTC expert Scott Hemphill by referencing a 2019 antitrust pitch Hemphill co-authored with Facebook co-founder Chris Hughes and former White House advisor Tim Wu.

The presentation cited public statements from reporters Kara Swisher and Om Malik as evidence of Meta’s dominance and aggressive acquisitions.

Hansen dismissed Malik as a ‘failed blogger’ with personal bias and accused Swisher of similar hostility, projecting a headline in which she described Mark Zuckerberg as a ‘small little creature with a shriveled soul.’

He also attempted to discredit a cited New York Post article by invoking the tabloid’s notorious ‘Headless Body in Topless Bar’ cover.

These moments highlight Meta’s growing resentment toward the press, which has intensified alongside rising criticism of its business practices. Once seen as scrappy disruptors, Facebook and other tech giants now face regular scrutiny—and appear eager to push back.

Swisher and Malik have both openly criticized Meta in the past. Swisher famously challenged Zuckerberg over content moderation and political speech, while Malik has questioned the company’s global expansion strategies.

Their inclusion in a legal document presented in court underscores how media commentary is influencing regulatory narratives. Meta has previously blamed critical press for damaging user sentiment in the wake of scandals like Cambridge Analytica.

The FTC argues that consistent engagement levels despite bad press prove Meta’s monopoly power—users feel they have no real alternatives to Facebook and Instagram. As the trial continues, so too does Meta’s public battle—not just with regulators, but with the journalists documenting its rise and reckoning.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Ray-Ban Meta smart glasses priced at ₹29,990 for Indian market

Meta has announced that its Ray-Ban Meta smart glasses will go on sale in India from 19 May, priced from ₹29,990 (approximately $353). The glasses are available for pre-order via Ray-Ban’s official website and will be sold in Ray-Ban retail stores across the country at launch.

The smart glasses support Meta AI, allowing users to ask questions about their surroundings, send messages, make phone calls, and even translate languages in real time. The AI assistant can process both visual and audio input and operate even while the user is offline.

At present, live translation features are available for English, French, Italian, and Spanish, though Meta has not yet added support for Indian languages. The glasses also integrate with music apps such as Spotify, Apple Music, Amazon Music, and Shazam for on-the-go audio playback.

Meta says it has sold around 2 million pairs globally since the smart glasses first launched in 2023. The debut in India marks a major expansion into a key global market, though support for regional language features remains a limitation for now.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

German watchdog demands Meta stop AI training with EU user data

The Verbraucherzentrale North Rhine-Westphalia (NRW), a regional consumer protection organisation in Germany, has issued a formal warning to Meta, urging the tech giant to stop training its AI models on data from European users.

The consumer group argues that Meta’s current approach violates EU privacy laws and may lead to legal action if not halted. Meta recently announced that it would use content from Facebook, Instagram, WhatsApp, and Messenger—including posts, comments, and public interactions—to train its AI systems in Europe.

The company claims this will improve the performance of Meta AI by helping it better understand European languages, culture, and history.

However, data protection authorities from several EU countries, including Belgium, France, and the Netherlands, have expressed concern and encouraged users to act before Meta’s new privacy policy takes effect on 27 May.

The Verbraucherzentrale NRW took the additional step of sending Meta a cease-and-desist letter on 30 April. Should Meta ignore the request, legal action could follow.

Christine Steffen, a data protection expert at the Verbraucherzentrale NRW, said that once personal data is used to train AI, it becomes nearly impossible to reverse. She criticised Meta’s opt-out model and insisted that meaningful user consent is legally required.

Austrian privacy advocate Max Schrems, head of the NGO Noyb, also condemned Meta’s strategy, accusing the company of ignoring EU privacy law in favour of commercial gain.

‘Meta should simply ask the affected people for their consent,’ he said, warning that failure to do so could have consequences across the EU.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!