Publishers lose traffic as readers trust AI more

Online publishers are facing an existential threat as AI increasingly becomes the primary source of information for users, warned Cloudflare CEO Matthew Prince during an Axios event in Cannes.

As AI-generated summaries dominate user queries, search engine referrals have plunged, urgently pushing media outlets to reconsider how they sustain revenue from their content.

Traffic patterns have dramatically shifted. A decade ago, Google sent a visitor to publishers for every two pages it crawled.

Today, that ratio has ballooned to 18:1. The picture is more extreme for AI firms: OpenAI’s ratio has jumped from 250:1 to 1,500:1 in just six months, while Anthropic’s has exploded from 6,000:1 to a staggering 60,000:1.

Although AI systems typically include links to sources, Prince noted that ‘people aren’t following the footnotes,’ meaning fewer clicks and less ad revenue.

Prince argued that audiences are beginning to trust AI summaries more than the original articles, reducing publishers’ visibility and direct engagement.

As the web becomes increasingly AI-mediated, fewer people read full articles, raising urgent questions about how creators and publishers can be fairly compensated.

To tackle the issue, Cloudflare is preparing to launch a new anti-scraping tool to block unauthorised data harvesting. Prince hinted that the tool has broad industry support and will be rolled out soon.

He remains confident in Cloudflare’s capacity to fight against such threats, noting the company’s daily battles against sophisticated global cyber actors.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Hidden privacy risk: Meta AI app may make sensitive chats public

Meta’s new AI app raises privacy concerns as users unknowingly expose sensitive personal information to the public.

The app includes a Discover feed where anyone can view AI chats — even those involving health, legal or financial data. Many users have accidentally shared full resumes, private conversations and medical queries without realising they’re visible to others.

Despite this, Meta’s privacy warnings are minimal. On iPhones, there’s no clear indication during setup that chats will be made public unless manually changed in settings.

Android users see a brief, easily missed message. Even the ‘Post to Feed’ button is ambiguous: many users assume it refers to their private chat history rather than publicly visible content.

Users must navigate deep into the app’s settings to make chats private. There, they can restrict who sees their AI prompts, stop sharing to Facebook and Instagram, and delete previous interactions.

Critics argue the app’s lack of clarity burdens users, leaving many at risk of oversharing without realising it.

While Meta describes the Discover feed as a way to explore creative AI usage, the result has been a chaotic mix of deeply personal content and bizarre prompts.

Privacy experts warn that the situation mirrors Meta’s longstanding issues with user data. Users are advised to avoid sharing personal details with the AI entirely and immediately turn off all public sharing options.


SoftBank plans $1 trillion AI and robotics park in Arizona

SoftBank founder Masayoshi Son is planning what could become his most audacious venture yet: a $1 trillion AI and robotics industrial park in Arizona.

Dubbed ‘Project Crystal Land’, the initiative aims to recreate a high-tech manufacturing hub reminiscent of China’s Shenzhen, focused on AI-powered robots and next-gen automation.

Son is courting global tech giants — including Taiwan Semiconductor Manufacturing Co. (TSMC) and Samsung — to join the vision, though none have formally committed.

The plan hinges on support from federal and state governments, with SoftBank already discussing possible tax breaks with US officials, including Commerce Secretary Howard Lutnick.

While TSMC is already investing $165 billion in Arizona facilities, sources suggest Son’s project has not altered the chipmaker’s current roadmap. SoftBank hopes to attract semiconductor and AI hardware leaders to power the park’s infrastructure.

Son has also approached SoftBank Vision Fund portfolio companies to participate, including robotics startup Agile Robots.

The park may serve as a production hub for emerging tech firms, complementing SoftBank’s broader investments, such as a potential $30 billion stake in OpenAI, a $6.5 billion acquisition of Ampere Computing, and funding for Stargate, a global data centre venture with OpenAI, Oracle, and MGX.

While the vision is still early, Project Crystal Land could radically shift US high-tech manufacturing. Son’s strategy relies heavily on project-based financing, allowing extensive infrastructure builds with minimal upfront capital.

As SoftBank eyes long-term AI growth and increased investor confidence, it remains to be seen whether this futuristic park will become a reality or just another of Son’s high-stakes dreams.


EU AI Act challenges 68% of European businesses, AWS report finds

As AI becomes integral to digital transformation, European businesses struggle to adapt to new regulations like the EU AI Act.

A report commissioned by AWS and Strand Partners revealed that 68% of surveyed companies find the EU AI Act difficult to interpret, with compliance absorbing around 40% of IT budgets.

Businesses unsure of regulatory obligations are expected to invest nearly 30% less in AI over the coming year, risking a slowdown in innovation across the continent.

The EU AI Act, effective since August 2024, introduces a phased risk-based framework to regulate AI in the EU. Some key provisions, including banned practices and AI literacy rules, are already enforceable.

Over the next year, further requirements will roll out, affecting AI system providers, users, distributors, and non-EU companies operating within the EU. The law prohibits exploitative AI applications and imposes strict rules on high-risk systems while promoting transparency in low-risk deployments.

AWS has reaffirmed its commitment to responsible AI, which is aligned with the EU AI Act. The company supports customers through initiatives like AI Service Cards, its Responsible AI Guide, and Bedrock Guardrails.

AWS was the first major cloud provider to receive ISO/IEC 42001 certification for its AI offerings and continues to engage with the EU institutions to align on best practices. Amazon’s AI Ready Commitment also offers free education on responsible AI development.

Despite the regulatory complexity, AWS encourages its customers to assess how their AI usage fits within the EU AI Act and adopt safeguards accordingly.

As compliance remains a shared responsibility, AWS provides tools and guidance, but customers must ensure their applications meet the legal requirements. The company updates customers as enforcement advances and new guidance is issued.


North Korea’s BlueNoroff uses deepfakes in Zoom calls to hack crypto workers

The North Korea-linked threat group BlueNoroff has been caught deploying deepfake Zoom meetings to target an employee at a cryptocurrency foundation, aiming to install malware on macOS systems.

According to cybersecurity firm Huntress, the attack began through a Telegram message that redirected the victim to a fake Zoom site. Over several weeks, the employee was lured into a group video call featuring AI-generated replicas of company executives.

When the employee encountered microphone issues during the meeting, the fake participants instructed them to download a Zoom extension, which instead executed a malicious AppleScript.

The script covertly fetched multiple payloads, installed Rosetta 2, and prompted for the system password while wiping command histories to hide forensic traces. Eight malicious binaries were uncovered on the compromised machine, including keyloggers, information stealers, and remote access tools.

BlueNoroff, also known as APT38 and part of the Lazarus Group, has a track record of targeting financial and blockchain organisations for monetary gain. The group’s past operations include the Bybit and Axie Infinity breaches.

Their campaigns often combine deep social engineering with sophisticated multi-stage malware tailored for macOS, with new tactics now mimicking audio and camera malfunctions to trick remote workers.

Cybersecurity analysts have noted that BlueNoroff has fractured into subgroups like TraderTraitor and CryptoCore, specialising in cryptocurrency theft.

Recent offshoot campaigns involve fake job interview portals and dual-platform malware, such as the Python-based PylangGhost and GolangGhost trojans, which harvest sensitive data from victims across operating systems.

The attackers have impersonated firms like Coinbase and Uniswap, mainly targeting users in India.


AI-generated photo falsely claims to show a downed Israeli jet

Following Iranian state media claims that its forces shot down two Israeli fighter jets, an image circulated online falsely purporting to show the wreckage of an F-35.

The photo, which shows a large jet crash-landing in a desert, quickly spread across platforms like Threads and South Korean forums, including Aagag and Ruliweb. An Israeli official dismissed the shootdown claim as ‘fake news’.

The image’s caption in Korean read: ‘The F-35 shot down by Iran. Much bigger than I thought.’ However, a detailed AFP analysis found the photo contained several hallmarks of AI generation.

People near the aircraft appear the same size as buses, and one vehicle appears to merge with the road — visual anomalies common in synthetic images.

In addition to size distortions, the aircraft’s markings did not match those used on actual Israeli F-35s. Lockheed Martin specifications confirm the F-35 is just under 16 metres long, unlike the oversized version shown in the image.

Furthermore, the wing insignia in the image differed from the Israeli Air Force’s authentic emblem.

Amid escalating tensions between Iran and Israel, such misinformation continues to spread rapidly. Although AI-generated content is becoming more sophisticated, inconsistencies in scale, symbols, and composition remain key indicators of digital fabrication.


France 24 partners with Mediagenix to streamline on-demand programming

Mediagenix has entered a collaboration with French international broadcaster France 24, operated by France Médias Monde, to support its content scheduling modernisation programme.

As part of the upgrade, France 24 will adopt Mediagenix’s AI-powered, cloud-based scheduling solution to manage content across its on-demand platforms. The system promises improved operational flexibility, enabling rapid adjustments to programming in response to major events and shifting editorial priorities.

Pamela David, Engineering Manager for TV and Systems Integration at France Médias Monde, said: ‘This partnership with Mediagenix is a critical part of equipping our France 24 channels with the best scheduling and content management solutions.’

‘The system gives our staff the ultimate flexibility to adjust schedules as major events happen and react to changing news priorities.’

Françoise Semin, Chief Commercial Officer at Mediagenix, added: ‘France Médias Monde is a truly global broadcaster. We are delighted to support France 24’s evolving scheduling needs with our award-winning solution.’

Training for France 24 staff will be provided by Lapins Bleus Formation, based in Paris, ahead of the system’s planned rollout next year.


Viper Technology sponsors rising AI talent for IOAI 2025 in China

Pakistani student Muhammad Ayan Abdullah has been selected to represent the country at the prestigious International Olympiad in Artificial Intelligence (IOAI), set to take place in Beijing, China, from 2–9 August 2025.

To support his journey, Viper Technology—a leading Pakistani IT hardware manufacturer—has partnered with the Punjab Information Technology Board (PITB) to provide Ayan with its flagship ‘PLUTO AI PC’.

Built locally for advanced AI and machine learning workloads, the high-performance computer reflects Viper’s mission to promote homegrown innovation and empower young tech talent on global platforms.

‘This is part of our commitment to backing the next generation of technology leaders,’ said Faisal Sheikh, Co-Founder and COO of Viper Technology. ‘We are honoured to support Muhammad Ayan Abdullah and showcase the strength of Pakistani talent and hardware.’

The PLUTO AI PC, developed and assembled in Pakistan, is a key part of Viper’s latest AI-focused product line—marking the country’s growing presence in competitive, global technology arenas.


Sam Altman claims OpenAI team rejecting Meta’s mega offers

Meta is intensifying efforts to recruit AI talent from OpenAI by offering signing bonuses worth up to $100 million and multi-million-dollar annual salaries. However, OpenAI CEO Sam Altman claims none of the company’s top researchers have accepted the offers.

Speaking on the Uncapped podcast, Altman said Meta had approached his team with ‘giant offers’, but OpenAI’s researchers stayed loyal, believing the company has a better chance of achieving superintelligence—AI that surpasses human capabilities.

OpenAI, where the average employee reportedly earns around $1.13 million a year, fosters a mission-driven culture focused on building AI for the benefit of humanity, Altman said.

Meta, meanwhile, is assembling a 50-person Superintelligence Lab, with CEO Mark Zuckerberg personally overseeing recruitment. Bloomberg reported that offers from Meta have reached seven to nine figures in total compensation.

Despite the aggressive approach, Meta appears to be losing some of its own researchers to rivals. VC principal Deedy Das recently said Meta lost three AI researchers to OpenAI and Anthropic, even after offering over $2 million annually.

In a bid to acquire more talent, Meta has also invested $14.3 billion in Scale AI, securing a 49% stake and bringing CEO Alexandr Wang into its Superintelligence Lab leadership.

Meta says its AI assistant now reaches one billion monthly users, while OpenAI reports 500 million weekly active users globally.


IBM combines watsonx and Guardium to tackle AI compliance

IBM has unveiled new software capabilities that integrate AI security and governance, claiming the industry’s first unified solution to manage the risks of agentic AI.

The enhancements merge IBM’s watsonx.governance platform—which supports oversight, transparency, and lifecycle management of AI systems—with Guardium AI Security, a tool built to protect AI models, data, and operational usage.

By unifying these tools, IBM’s solution offers enterprises the ability to oversee both governance and security across AI deployments from a single interface. It also supports compliance with 12 major frameworks, including the EU AI Act and ISO 42001.

The launch aims to address growing concerns around AI safety, regulation, and accountability as businesses scale AI-driven operations.
