Cloudflare’s new tool lets publishers charge AI crawlers

Cloudflare, which powers 20% of the web, has launched a new marketplace called Pay per Crawl, aiming to redefine how website owners interact with AI companies.

The platform lets publishers set a price that AI crawlers must pay to access their content, replacing the all-or-nothing choice between unrestricted scraping and outright blocking. Website owners can charge a micropayment for each crawl, permit free access, or block crawlers altogether, gaining finer control over their material.
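The three per-crawler options can be pictured as a simple dispatcher that returns an HTTP-style status for each crawl request; HTTP 402 ('Payment Required') is the natural response for an unpaid crawl. The sketch below is purely illustrative — the names, prices, and logic are assumptions for exposition, not Cloudflare's actual API:

```python
# Illustrative sketch only: hypothetical names and prices, not Cloudflare's API.
# Models a publisher's three Pay per Crawl choices: allow, charge, or block.
from dataclasses import dataclass, field

@dataclass
class CrawlPolicy:
    action: str              # "allow", "charge", or "block"
    price_usd: float = 0.0   # per-crawl micropayment when action == "charge"

@dataclass
class Publisher:
    policies: dict                     # crawler name -> CrawlPolicy
    default: CrawlPolicy = field(default_factory=lambda: CrawlPolicy("block"))
    revenue_usd: float = 0.0

    def handle_crawl(self, crawler: str, agrees_to_pay: bool = False) -> int:
        """Return an HTTP-style status code for a single crawl request."""
        policy = self.policies.get(crawler, self.default)
        if policy.action == "allow":
            return 200                 # free access
        if policy.action == "charge":
            if agrees_to_pay:
                self.revenue_usd += policy.price_usd
                return 200             # paid access; marketplace settles payment
            return 402                 # Payment Required: no payment agreement
        return 403                     # blocked outright

site = Publisher(policies={
    "searchbot": CrawlPolicy("allow"),
    "ai-crawler": CrawlPolicy("charge", price_usd=0.001),
})

for _ in range(1000):
    site.handle_crawl("ai-crawler", agrees_to_pay=True)
print(f"${site.revenue_usd:.2f}")  # 1,000 paid crawls at $0.001 each -> $1.00
```

In the real service, Cloudflare sits between the two parties and manages billing; the sketch folds that role into a single revenue counter for brevity.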

Over the past year, Cloudflare introduced tools for publishers to monitor and block AI crawlers, laying the groundwork for the marketplace. Major publishers like Condé Nast, TIME and The Associated Press have joined Cloudflare in blocking AI crawlers by default, supporting a permission-based approach.

The company also now blocks AI bots by default on all new sites, requiring site owners to grant access.

Cloudflare’s data reveals that AI crawlers scrape websites far more aggressively than traditional search engines, often without sending equivalent referral traffic. For example, OpenAI’s crawler scraped sites 1,700 times for every referral, compared to Google’s 14 times.

As AI agents evolve to gather and deliver information directly, publishers who rely on site visits for revenue face a growing challenge.

Pay per Crawl could offer a new business model for publishers in an AI-driven world. Cloudflare envisions a future where AI agents operate with a budget to access quality content programmatically, helping users synthesise information from trusted sources.

For now, both publishers and AI companies need Cloudflare accounts to set crawl rates, with Cloudflare managing payments. The company is also exploring stablecoins as a possible payment method in the future.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Qantas cyber attack sparks customer alert

Qantas is investigating a major data breach that may have exposed the personal details of up to six million customers.

The breach affected a third-party platform used by the airline’s contact centre to store sensitive data, including names, phone numbers, email addresses, dates of birth and frequent flyer numbers.

The airline discovered unusual activity on 30 June and responded by immediately isolating the affected system. While the full scope of the breach is still being assessed, Qantas expects the volume of stolen data to be significant.

However, it confirmed that no passwords, PINs, credit card details or passport numbers were stored on the compromised platform.

Qantas has informed the Australian Federal Police, the Australian Cyber Security Centre and the Office of the Australian Information Commissioner. CEO Vanessa Hudson apologised to customers and urged anyone concerned to call a dedicated support line. She added that airline operations and safety remain unaffected.

The incident follows recent cyber attacks on Hawaiian Airlines, WestJet and major UK retailers, reportedly linked to a group known as Scattered Spider. The breach adds to a growing list of Australian organisations targeted in 2025, in what privacy authorities describe as a worsening trend.


Springer machine learning book faces fake citation scandal

A Springer Nature book on machine learning has come under scrutiny after researchers discovered that many of its citations were fabricated or erroneous.

A review of 18 citations in Mastering Machine Learning: From Basics to Advanced revealed that two-thirds either referenced nonexistent papers or misattributed authorship and publication sources.

Several academics whose names were included in the book confirmed they did not write the cited material, while others noted inaccuracies in where their actual work was supposedly published. One researcher was alerted by Google Scholar to multiple fake citations under his name.

Govindakumar Madhavan, the author, has not confirmed whether AI tools were used in producing the content, though his book discusses ethical concerns around AI-generated text.

Springer Nature has acknowledged the issue and is investigating whether the book breached its AI use policies, which require authors to declare AI involvement beyond basic editing.

The incident has reignited concerns about publishers’ quality control, with critics pointing to the increasing misuse of large language models in academic texts. As AI tools become more advanced, ensuring the integrity of published research remains a growing challenge for both authors and editors.


Tinder trials face scans to verify profiles

Tinder is trialling a facial recognition feature to boost user security and crack down on fraudulent profiles. The pilot is currently underway in the US, after initial launches in Colombia and Canada.

New users are now required to take a short video selfie during sign-up, which is matched against their profile photos to confirm authenticity. The app also compares the scan with other accounts to catch duplicates and impersonations.

Verified users receive a profile badge, and Tinder stores a non-reversible encrypted face map to help detect duplicate accounts. The company claims all facial data is deleted when accounts are removed.

The update follows a sharp rise in catfishing and romance scams, with over 64,000 cases reported in the US last year alone. Other measures introduced in recent years include photo verification, ID checks and location-sharing tools.


Meta’s Facebook uses phone photos for AI if users allow it

Meta has introduced a new feature that allows Facebook to access and analyse users’ photos stored on their phones, provided they give explicit permission.

The move is part of a broader push to improve the company’s AI tools, especially after the underwhelming reception of its Llama 4 model. Users who opt in agree to Meta’s AI Terms of Service, which grant the platform the right to retain and use personal media for content suggestions.

The new feature, currently being tested in the US and Canada, is designed to offer Facebook users creative ideas for Stories by processing their photos and videos through cloud infrastructure.

When enabled, users may receive suggestions such as collages or travel highlights based on when and where images were captured, as well as who or what appears in them. However, participation is strictly optional and can be turned off at any time.

Facebook clarifies that the media analysed under the feature is not used to train AI models in the current test. Still, the system does upload selected media to Meta’s servers on an ongoing basis, raising privacy concerns.

The option to activate these suggestions can be found in the Facebook app’s settings, where users are asked whether they want camera roll data to inform sharing ideas.

Meta has been actively promoting its AI ambitions, with CEO Mark Zuckerberg pushing for the development of ‘superintelligence’. The company recently launched Meta Superintelligence Labs to lead these efforts.

Despite facing stiff competition from OpenAI, DeepSeek and Google, Meta appears determined to deepen its use of personal data to boost its AI capabilities.


OpenInfra Summit Europe brings focus on AI and VMware alternatives

The OpenInfra Foundation and its global community will gather at the OpenInfra Summit Europe from 17 to 19 October in Paris-Saclay to explore how open source is reshaping digital infrastructure.

It will be the first summit since the Foundation joined the Linux Foundation, uniting major projects such as Linux, Kubernetes and OpenStack under the OpenInfra Blueprint. The agenda includes a strong focus on digital sovereignty, VMware migration strategies and infrastructure support for AI workloads.

Taking place at École Polytechnique in Palaiseau, the summit arrives at a time when open source software is powering nearly $9 trillion of economic activity.

With over 38% of the global OpenInfra community based in Europe, the event will focus on regional priorities like data control, security, and compliance with new EU regulations such as the Cyber Resilience Act.

Developers, IT leaders and business strategists will explore how projects like Kata Containers, Ceph and RISC-V integrate to support cost-effective, scalable infrastructure.

The summit will also mark OpenStack’s 15th anniversary, with use cases shared by the UN, BMW and nonprofit Restos du Coeur.

Attendees will witness a live VMware migration demo featuring companies like Canonical and Rackspace, highlighting real-world approaches to transitioning away from proprietary platforms. Sessions will dive into topics like CI pipelines, AI-powered infrastructure, and cloud-native operations.

As a community-led event, OpenInfra Summit Europe remains focused on collaboration.

With sponsors including Canonical, Mirantis, Red Hat and others, the gathering offers developers and organisations an opportunity to share best practices, shape open source development, and strengthen the global infrastructure ecosystem.


AI training with pirated books triggers massive legal risk

A US court has ruled that AI company Anthropic engaged in copyright infringement by downloading millions of pirated books to train its language model, Claude.

Although the court found that using copyrighted material for AI training could qualify as ‘fair use’ under US law when the content is transformed, it also held that acquiring the content illegally instead of licensing it lawfully constituted theft.

Judge William Alsup described AI as one of the most transformative technologies of our time, but found that Anthropic had obtained millions of digital books from pirate sites such as LibGen and Pirate Library Mirror.

He noted that buying the same books later in print form does not erase the initial violation, though it may reduce potential damages.

The penalties for wilful copyright infringement in the US could reach up to $150,000 per work, meaning total compensation might run into the billions.

The case highlights the fine line between transformation and theft and signals growing legal pressure on AI firms to respect intellectual property instead of bypassing established licensing frameworks.

Australia, which uses a ‘fair dealing’ system rather than ‘fair use’, already offers flexible licensing schemes through organisations like the Copyright Agency.

CEO Josephine Johnston urged policymakers not to weaken Australia’s legal framework in favour of global tech companies, arguing that licensing provides certainty for developers and fair payment to content creators.


New NHS plan adds AI to protect patient safety

The NHS is set to introduce a world-first AI system to detect patient safety risks early by analysing hospital data for warning signs of deaths, injuries, or abuse.

Instead of waiting for patterns to emerge through traditional oversight, the AI will use near real-time data to trigger alerts and launch rapid inspections.

Health Secretary Wes Streeting announced that a new maternity-focused AI tool will roll out across NHS trusts in November. It will monitor stillbirths, brain injuries and death rates, helping identify issues before they become scandals.

The initiative forms part of a new 10-year plan to modernise the health service and move it from analogue to digital care.

The technology will send alerts to the Care Quality Commission, whose teams will investigate flagged cases. Professor Meghana Pandit, NHS England’s medical director, said the UK would become the first country to trial this AI-enabled early warning system to improve patient care.

CQC chief Sir Julian Hartley added it would strengthen quality monitoring across services.

However, nursing leaders voiced concerns that AI could distract from more urgent needs. Professor Nicola Ranger of the Royal College of Nursing warned that low staffing levels remain a critical issue.

She stressed that one nurse often handles too many patients, and technology should not replace the essential investment in frontline staff.


AI governance through the lens of magical realism

AI today straddles the line between the extraordinary and the mundane, a duality that evokes the spirit of magical realism—a literary genre where the impossible blends seamlessly with the real. Speaking at the 20th Internet Governance Forum (IGF) in Lillestrøm, Norway, Jovan Kurbalija proposed that we might better understand the complexities of AI governance by viewing it through this narrative lens.

Like Gabriel García Márquez’s floating characters or Salman Rushdie’s prophetic protagonists, AI’s remarkable feats—writing novels, generating art, mimicking human conversation—are increasingly accepted without question, despite their inherent strangeness.

Kurbalija argues that AI, much like the supernatural in literature, doesn’t merely entertain; it reveals and shapes profound societal realities. Algorithms quietly influence politics, reshape economies, and even redefine relationships.

Just as magical realism uses the extraordinary to comment on power, identity, and truth, AI forces us to confront new ethical dilemmas: Who owns AI-created content? Can consent be meaningfully given to machines? And does predictive technology amplify societal biases?

The risks of AI—job displacement, misinformation, surveillance—are akin to the symbolic storms of magical realism: always present, always shaping the backdrop. Governance, then, must walk a fine line between stifling innovation and allowing unchecked technological enchantment.

Kurbalija warns against ‘black magic’ policy manipulation cloaked in humanitarian language and urges regulators to focus on real-world impacts while resisting the temptation of speculative fears. Ultimately, AI isn’t science fiction—it’s magical realism in motion.

As we build policies and frameworks to govern it, we must ensure this magic serves humanity, rather than distort our sense of what is real, ethical, and just. In this unfolding story, the challenge is not only technological, but deeply human.


Cybercrime surge hits airlines across North America

According to the FBI and cybersecurity experts, a well-known cybercrime group has launched fresh attacks on the airline industry, successfully breaching the networks of several airlines in the US and Canada.

The hackers, identified as ‘Scattered Spider’, are known for aggressive extortion tactics and have now shifted their focus from previous targets in insurance and retail to aviation.

Although no flights or operations have been disrupted, airline security teams remain on high alert. Hawaiian Airlines and Canada’s WestJet have acknowledged recent cyber incidents, while sources suggest more affected companies may step forward soon.

Both airlines reported no impact on day-to-day services, likely due to solid internal defences and continuity planning.

The attackers often exploit help desks by impersonating employees or customers to access corporate systems. Experts warn that airline call centres are especially vulnerable, given their importance to customer support.

Cybersecurity firms, including Mandiant, are now supporting the response and advising firms to reinforce these high-risk entry points.

Scattered Spider has previously breached major casinos, insurance, and retail companies. The FBI confirmed it is working with aviation partners to contain the threat and assist victims.

Industry leaders remain alert, noting that airlines, IT contractors, and vendors across the aviation sector are at risk from the escalating threat.
