Cloudflare claims Perplexity circumvented website scraping blocks

Cloudflare has accused AI startup Perplexity of ignoring websites’ explicit instructions not to scrape their content.

According to the internet infrastructure company, Perplexity has allegedly disguised its identity and used technical workarounds to bypass restrictions set out in robots.txt files, which tell bots which pages they may or may not access.
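The robots.txt mechanism referred to here can be illustrated with Python’s standard `urllib.robotparser` module; the rules and the bot name `ExampleBot` below are hypothetical, not any real site’s policy.

```python
# Minimal sketch of how a well-behaved crawler consults robots.txt,
# using Python's standard urllib.robotparser. The rules and bot name
# are illustrative only.
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# A compliant bot checks before fetching each URL; skipping or ignoring
# this check is the behaviour Cloudflare alleges.
print(parser.can_fetch("ExampleBot", "https://example.com/private/page"))  # False
print(parser.can_fetch("ExampleBot", "https://example.com/public/page"))   # True
```

Crucially, robots.txt is a voluntary convention: nothing in the protocol technically prevents a client from fetching a disallowed page.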

The behaviour was reportedly detected after multiple Cloudflare customers complained about unauthorised scraping attempts.

Instead of respecting these rules, Cloudflare claims Perplexity altered its bots’ user agent to appear as a Google Chrome browser on macOS and switched its network identifiers to avoid detection.
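The alleged disguise works because any HTTP client can declare an arbitrary browser identity. A sketch using Python’s standard `urllib.request` (the user-agent string below is an illustrative Chrome-on-macOS value, not Perplexity’s actual one):

```python
# Sketch of why a User-Agent header cannot be trusted for bot detection:
# the client sets the header itself before the request is ever sent.
# The string below is an illustrative Chrome-on-macOS value.
import urllib.request

req = urllib.request.Request(
    "https://example.com/",
    headers={
        "User-Agent": (
            "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) "
            "AppleWebKit/537.36 (KHTML, like Gecko) "
            "Chrome/124.0.0.0 Safari/537.36"
        )
    },
)

# Servers relying on this header alone cannot distinguish such a client
# from a real browser, which is why Cloudflare says it fell back on
# machine learning and network analysis instead.
print(req.get_header("User-agent"))
```

This is why detection efforts of the kind Cloudflare describes focus on network-level signals rather than the self-reported identity.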

The company says these tactics were seen across tens of thousands of domains and millions of daily requests, and that it used machine learning and network analysis to identify the activity.

Perplexity has denied the allegations, calling Cloudflare’s report a ‘sales pitch’ and disputing that the bot named in the findings belongs to the company. Cloudflare has since removed Perplexity’s bots from its verified list and introduced new blocking measures.

The dispute arises as Cloudflare intensifies its efforts to grant website owners greater control over AI crawlers. Last month, it launched a marketplace enabling publishers to charge AI firms for scraping, alongside free tools to block unauthorised data collection.

Perplexity has previously faced criticism over content use, with outlets such as Wired accusing it of plagiarism in 2024.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Musk’s robotaxi ambitions threatened as Tesla faces a $243 million Autopilot verdict

A recent court verdict has ordered Tesla to pay approximately $243 million in damages over a 2019 fatal crash involving an Autopilot-equipped Model S.

The Florida jury found Tesla’s driver-assistance software defective, a claim the company intends to appeal, asserting that the driver was solely responsible for the incident.

The ruling may significantly impact Tesla’s ambitions to expand its emerging robotaxi network in the US, fuelling heightened scrutiny over the safety of the company’s autonomous technology from both regulators and the public.

The timing of this legal setback is critical: Tesla is seeking regulatory approval for its robotaxi services, which are crucial to its market valuation and to fending off global competition, even as the company contends with backlash against CEO Elon Musk’s political views.

Additionally, the company has recently awarded Musk a substantial new compensation package worth approximately $29 billion in stock options, signalling its continued reliance on his leadership at a critical juncture, as it plans to transition from a struggling auto business towards futuristic ventures like robotaxis and humanoid robots.

Tesla’s approach to autonomous driving, which relies on cameras and AI instead of the more expensive lidar and radar sensors used by competitors, underpins the limited robotaxi trial it has started in Texas. However, its aggressive expansion plans for the service contrast starkly with the cautious rollouts of companies such as Waymo, which runs the US’s only commercial driverless robotaxi system.

The jury’s decision also complicates Tesla’s interactions with state regulators, as the company awaits approvals in multiple states, including California and Florida. While Nevada has engaged with Tesla regarding its robotaxi programme, Arizona remains indecisive.

The ruling challenges Tesla’s claims about Autopilot’s safety, especially since the case involved a distracted driver whose vehicle ran a stop sign and collided with a parked car, yet the jury still assigned partial blame to the Autopilot system.

Source: Reuters

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

The US launches $100 million cybersecurity grant for states

The US government has unveiled more than $100 million in funding to help local and tribal communities strengthen their cybersecurity defences.

The announcement came jointly from the Cybersecurity and Infrastructure Security Agency (CISA) and the Federal Emergency Management Agency (FEMA), both part of the Department of Homeland Security.

Instead of a single pool, the funding is split into two distinct grants. The State and Local Cybersecurity Grant Program (SLCGP) will provide $91.7 million to 56 states and territories, while the Tribal Cybersecurity Grant Program (TCGP) allocates $12.1 million specifically for tribal governments.

These funds aim to support cybersecurity planning, exercises and service improvements.

CISA’s acting director, Madhu Gottumukkala, said the grants ensure communities have the tools needed to defend digital infrastructure and reduce cyber risks. The effort follows a significant cyberattack on St. Paul, Minnesota, which prompted a state of emergency and deployment of the National Guard.

Officials say the funding reflects a national commitment to proactive digital resilience instead of reactive crisis management. Homeland Security leaders describe the grant as both a strategic investment in critical infrastructure and a responsible use of taxpayer funds.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

The US considers chip tracking to prevent smuggling to China

The US is exploring how to build better location-tracking into advanced chips, as part of an effort to prevent American semiconductors from ending up in China.

Michael Kratsios, a senior official behind Donald Trump’s AI strategy, confirmed that software or physical updates to chips are being considered to support traceability.

Instead of relying on external enforcement, Washington aims to work directly with the tech industry to improve monitoring of chip movements. The strategy forms part of a broader national plan to counter smuggling and maintain US dominance in cutting-edge technologies.

Beijing recently summoned Nvidia representatives to address concerns over American proposals linked to tracking features and perceived security risks in the company’s H20 chips.

Although US officials have not held direct talks with Nvidia or AMD on the matter, Kratsios clarified that chip tracking is now a formal objective.

The move comes even as Trump’s team signals readiness to lift certain export restrictions to China in return for trade benefits, such as rare-earth magnet sales to the US.

Kratsios criticised China’s push to lead global AI regulation, saying countries should define their paths instead of following a centralised model. He argued that the US innovation-first approach offers a more attractive alternative.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Eswatini advances digital vision with new laws, 5G and skills training

Eswatini is moving forward with a national digital transformation plan focused on infrastructure, legislation and skills development.

The country’s Minister of ICT, Savannah Maziya, outlined key milestones during the 2025 Eswatini Economic Update, co-hosted with the World Bank.

In her remarks, Maziya said that digital technology plays a central role in job creation, governance and economic development. She introduced several regulatory frameworks, including a Cybersecurity Bill, a Critical Infrastructure Bill and an E-Commerce Strategy.

Additional legislation is planned for emerging technologies such as AI, robotics and satellite systems.

Infrastructure improvements include the nationwide expansion of fibre optic networks and a rise in international connectivity capacity from 47 Gbps to 72 Gbps.

Mbabane, the capital, is being developed as a Smart City with 5G coverage, AI-enabled surveillance and public Wi-Fi access.

The Ministry of ICT has launched more than 11 digital public services and plans to add 90 more in the next three years.

A nationwide coding initiative will offer digital skills training to over 300,000 citizens, supporting wider efforts to increase access and participation in the digital economy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Creative industries raise concerns over the EU AI Act

Organisations representing creative sectors have issued a joint statement expressing concerns over the current implementation of the EU AI Act, particularly its provisions for general-purpose AI systems.

The response focuses on recent documents, including the General Purpose AI Code of Practice, accompanying guidelines, and the template for training data disclosure under Article 53.

The signatories, drawn from music and broader creative industries, said they had engaged extensively throughout the consultation process. They now argue that the outcomes do not fully reflect the issues raised during those discussions.

According to the statement, the result does not provide the level of intellectual property protection that some had expected from the regulation.

The group has called on the European Commission to reconsider the implementation package and is encouraging the European Parliament and member states to review the process.

The original EU AI Act was widely acknowledged as a landmark regulation, with technology firms and creative industries closely watching its rollout across member countries.

Elsewhere, Google confirmed that it will sign the General Purpose AI Code of Practice. The company said the latest version supports Europe’s broader innovation goals more effectively than earlier drafts, but it also noted ongoing concerns.

These include the potential impact of specific requirements on competitiveness and on the handling of trade secrets.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US federal appeals court renews scrutiny in child exploitation suit against Musk’s X

A federal appeals court in San Francisco has reinstated critical parts of a lawsuit against Elon Musk’s social media platform X, previously known as Twitter, regarding child exploitation content. 

While recognising that X holds significant legal protections against liability for content posted by users, the 9th Circuit panel determined that the platform must address allegations of negligence stemming from delays in reporting explicit material involving minors to authorities.

The troubling case revolves around two minors who were tricked via Snapchat into providing explicit images, which were later compiled and widely disseminated on Twitter.

Despite being alerted to the content, Twitter reportedly took nine days to remove it and notify the National Center for Missing and Exploited Children, during which the disturbing video received over 167,000 views. 

The court emphasised that once the platform was informed, it had a clear responsibility to act swiftly, separating this obligation from typical protections granted by the Communications Decency Act.

The ruling additionally criticised X for having an infrastructure that allegedly impeded users’ ability to report child exploitation effectively. 

However, the court upheld the dismissal of other claims, including allegations that Twitter knowingly benefited from sex trafficking or deliberately amplified illicit content. 

Advocates for the victims welcomed the decision as a step toward accountability, setting the stage for further legal scrutiny and potential trial proceedings.

Source: Reuters

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK Online Safety Act under fire amid free speech and privacy concerns

The UK’s Online Safety Act, aimed at protecting children and eliminating illegal content online, is stirring a strong debate due to its stringent requirements on social media platforms and websites hosting adult content.

Critics argue that the act’s broad application could unintentionally suppress free speech, as highlighted by social media platform X.

X claims the act results in the censorship of lawful content, reflecting concerns shared by politicians, free-speech campaigners, and content creators.

Moreover, public unease is evident, with over 468,000 individuals signing a petition for the act’s repeal, citing privacy concerns over mandatory age checks requiring personal data on adult content sites.

Despite mounting criticism, the UK government is resolute in its commitment to the legislation. Technology Secretary Peter Kyle equates opposition to siding with online predators, emphasising child protection.

The government asserts that the act also mandates platforms to uphold freedom of expression alongside child safety obligations.

X criticises both the broad scope and the tight compliance timelines of the act, warning of pressure towards over-censorship, and calls for significant statutory revisions to protect personal freedoms while safeguarding children.

The government rebuffs claims that the Online Safety Act compromises free speech, with assurances that the law equally protects freedom of expression.

Meanwhile, Ofcom, the UK’s communications regulator, has opened investigations into several companies managing pornography sites, underscoring the rigour of enforcement.

Source: Reuters

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US court mandates Android app competition, loosens billing rules

Google’s long-standing dominance over Android app distribution has been declared illegal by the Ninth Circuit Court of Appeals, reinforcing a prior jury verdict in favour of Epic Games. Google now faces an injunction compelling it to allow rival app stores and alternative billing systems within the Google Play ecosystem for a three-year period ending in November 2027.

A technical committee jointly selected by Epic and Google will oversee sensitive implementation tasks, including granting competitors approved access to Google’s expansive app catalogue while ensuring minimal security risk. The order also requires that developers not be tied to Google’s billing system for in-app purchases.

Market analysts warn that reduced dependency on Play Store exclusivity and the option to use alternative payment processors could cut Google’s app revenue by between $1 billion and $1.5 billion annually. Despite Google’s brand recognition, developers and consumers may shift towards lower-cost alternatives competing on platform flexibility.

While the ruling aims to restore competition, Google maintains it is appealing and has requested additional delays to avoid rapid structural changes. Proponents, including Microsoft, regulators, and Epic Games, hail the decision as a landmark step toward fairer mobile market access.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Delta’s personalised flight costs under scrutiny

Delta Air Lines’ recent revelation about using AI to price some airfares is drawing significant criticism. The airline aims to increase AI-influenced pricing to 20 per cent of its domestic flights by late 2025.

While Delta’s president, Glen Hauenstein, noted positive results from their Fetcherr-supplied AI tool, industry observers and senators are voicing concerns. Critics worry that AI-driven pricing, similar to rideshare surge models, could lead to increased fares for travellers and raise serious data privacy issues.

Senators including Ruben Gallego, Mark Warner, and Richard Blumenthal highlighted fears that ‘surveillance pricing’ could draw on extensive personal data to estimate a passenger’s willingness to pay.

Despite Delta’s spokesperson denying individualised pricing based on personal information, AI experts suggest factors like device type and browsing behaviour are likely influencing prices, making them ‘deeply personalised’.

Different travellers could be affected unevenly. Bargain hunters with flexible dates might benefit, but business travellers and last-minute bookers may face higher costs. Other airlines like Virgin Atlantic also use Fetcherr’s technology, indicating a wider industry trend.

Pricing experts like Philip Carls warn that passengers won’t know if they’re getting a fair deal, and proving discrimination, even if unintended by AI, could be almost impossible.

American Airlines’ CEO, Robert Isom, has publicly criticised Delta’s move, stating American won’t copy the practice, though past incidents show airlines can adjust fares based on booking data even without AI.

With dynamic pricing technology already permitted, experts anticipate lawmakers will soon scrutinise AI’s role more closely, potentially leading to new transparency mandates.

For now, travellers can try strategies like using incognito mode, clearing cookies, or employing a VPN to obscure their digital footprint and potentially avoid higher AI-driven fares.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!