Cloudflare claims Perplexity circumvented website scraping blocks

Cloudflare has accused AI startup Perplexity of ignoring explicit website instructions not to scrape their content.

According to the internet infrastructure company, Perplexity has allegedly disguised its identity and used technical workarounds to bypass restrictions set out in robots.txt files, which tell bots which pages they may or may not access.
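A robots.txt file is simply a plain-text list of per-crawler allow and deny rules served at a site's root. A minimal sketch of how those rules work, using Python's standard `urllib.robotparser` with hypothetical rules (the bot name and URLs here are illustrative, not taken from Cloudflare's report):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt rules a publisher might serve;
# any real site's file will differ.
rules = """
User-agent: PerplexityBot
Disallow: /

User-agent: *
Allow: /
""".splitlines()

parser = RobotFileParser()
parser.parse(rules)

# The named bot is barred from every page...
print(parser.can_fetch("PerplexityBot", "https://example.com/article"))  # False
# ...while all other crawlers remain welcome.
print(parser.can_fetch("Mozilla/5.0", "https://example.com/article"))    # True
```

The key point for the dispute: enforcement is entirely voluntary. The file only declares the site owner's wishes, which is why Cloudflare's allegation centres on a crawler presenting a browser-like identity to sidestep rules aimed at its real name.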

The behaviour was reportedly detected after multiple Cloudflare customers complained about unauthorised scraping attempts.

Instead of respecting these rules, Cloudflare claims Perplexity altered its bots’ user agent to appear as a Google Chrome browser on macOS and switched its network identifiers to avoid detection.

The company says these tactics were seen across tens of thousands of domains and millions of daily requests, and that it used machine learning and network analysis to identify the activity.

Perplexity has denied the allegations, calling Cloudflare’s report a ‘sales pitch’ and disputing that the bot named in the findings belongs to the company. Cloudflare has since removed Perplexity’s bots from its verified list and introduced new blocking measures.

The dispute arises as Cloudflare intensifies its efforts to grant website owners greater control over AI crawlers. Last month, it launched a marketplace enabling publishers to charge AI firms for scraping, alongside free tools to block unauthorised data collection.

Perplexity has previously faced criticism over content use, with outlets such as Wired accusing it of plagiarism in 2024.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

The risky rise of all-in-one AI companions

A concerning new trend is emerging: AI companions are merging with mental health tools, blurring ethical lines. Human therapists are required to maintain a professional distance. Yet AI doesn’t follow such rules; it can be both confidant and counsellor.

AI chatbots are increasingly marketed as friendly companions. At the same time, they can offer mental health advice. Combined, you get an AI friend who also becomes your emotional guide. The mix might feel comforting, but it’s not without risks.

Unlike a human therapist, AI has no ethical compass. It mimics caring responses based on patterns, not understanding. One prompt might elicit therapist-style advice, the next best-friend energy: a murky blend of roles with no safeguards.

The deeper issue? There’s little incentive for AI makers to stop this. Blending companionship and therapy boosts user engagement and profits. Unless laws intervene, these all-in-one bots will keep evolving.

There’s also a massive privacy cost. People confide personal feelings to these bots, often daily, for months. The data may be reviewed, stored, and reused to train future models. Your digital friend and therapist might also be your data collector.


Google signs groundbreaking deal to cut data centre energy use

Google has become the first major tech firm to sign formal agreements with US electric utilities to ease grid pressure. The deals come as data centres drive unprecedented energy demand, straining power infrastructure in several regions.

The company will work with Indiana Michigan Power and the Tennessee Valley Authority to reduce electricity usage during peak demand, freeing up capacity for the wider grid when it is needed most.

Under the agreements, Google will temporarily scale down its data centre operations, particularly those linked to energy-intensive AI and machine learning workloads.

Google described the initiative as a way to speed up data centre integration with local grids while avoiding costly infrastructure expansion. The move reflects growing concern over AI’s rising energy footprint.

Demand-response programmes, once used mainly in heavy manufacturing and crypto mining, are now being adopted by tech firms to stabilise grids in return for lower energy costs.


OpenAI launches ‘study mode’ to curb AI-fuelled cheating

OpenAI has introduced a new ‘study mode’ to help students use AI for learning rather than cheating. The update arrives amid a spike in academic dishonesty linked to generative AI tools.

According to The Guardian, a UK survey found nearly 7,000 confirmed cases of AI misuse during the 2023–24 academic year. Universities are under pressure to adapt assessments in response.

Under the chatbot’s Tools menu, the new mode walks users through questions with step-by-step guidance, acting more like a tutor than a solution engine.

Jayna Devani, OpenAI’s international education lead, said the aim is to foster productive use of AI. ‘It’s guiding me towards an answer, rather than just giving it to me first-hand,’ she explained.

The tool can assist with homework and exam prep and even interpret uploaded images of past papers. OpenAI cautions it may still produce errors, underscoring the need for broader conversations around AI in education.


The US launches $100 million cybersecurity grant for states

The US government has unveiled more than $100 million in funding to help local and tribal communities strengthen their cybersecurity defences.

The announcement came jointly from the Cybersecurity and Infrastructure Security Agency (CISA) and the Federal Emergency Management Agency (FEMA), both part of the Department of Homeland Security.

Instead of a single pool, the funding is split into two distinct grants. The State and Local Cybersecurity Grant Program (SLCGP) will provide $91.7 million to 56 states and territories, while the Tribal Cybersecurity Grant Program (TCGP) allocates $12.1 million specifically for tribal governments.

These funds aim to support cybersecurity planning, exercises and service improvements.

CISA’s acting director, Madhu Gottumukkala, said the grants ensure communities have the tools needed to defend digital infrastructure and reduce cyber risks. The effort follows a significant cyberattack on St. Paul, Minnesota, which prompted a state of emergency and deployment of the National Guard.

Officials say the funding reflects a national commitment to proactive digital resilience instead of reactive crisis management. Homeland Security leaders describe the grant as both a strategic investment in critical infrastructure and a responsible use of taxpayer funds.


AI adoption soothes stress even as job fears rise among employees

A recent Fortune survey indicates that 61 percent of white‑collar professionals expect AI to make their roles, or even their entire teams, obsolete within 3–5 years, yet most continue to rely on AI tools daily without visible concern.

Seventy percent of respondents credit AI with boosting their creativity and productivity, and 40 percent say it has eased stress and improved work-life balance. Despite these benefits, many admit to ‘feigning’ AI use in workplace settings, often driven by peer pressure or a lack of formal training.

Executive commentary underscores the tension: senior business leaders, including Ford CEO Jim Farley and Anthropic CEO Dario Amodei, predict rapid AI-driven disruption of white-collar roles. Some executives forecast that up to 50 percent of certain job categories could be eliminated, though others argue AI may open new opportunities.

Academic studies suggest a more nuanced impact: AI is reshaping role definitions by automating routine tasks while increasing demand for complementary skills, such as ethics, teamwork, and digital fluency. Wage benefits are growing in jobs that effectively blend AI with human oversight.


Altman shares first glimpse of GPT-5 via Pantheon screenshot

OpenAI CEO Sam Altman shared a screenshot on X showing GPT-5 in action. The post casually endorsed the animated sci-fi series Pantheon, a cult tech favourite exploring general AI.

When asked if GPT-5 also recommends the show, Altman replied with a screenshot: ‘turns out yes’. It marked one of the earliest public glimpses of the new model, hinting at expanded capabilities.

GPT-5 is expected to outperform its predecessors, with a larger context window, multimodal abilities, and more agentic task handling. The screenshot also shows that some quirks remain, such as its fondness for the em dash.

The model identified Pantheon as having a 100% critic rating on Rotten Tomatoes and described it as ‘cerebral, emotional, and philosophically intense’. Business Insider verified the score and tone of the reviews.

OpenAI faces mounting pressure to keep pace with rivals like Google DeepMind, Meta, xAI, and Anthropic. Public teasers such as this one suggest GPT-5 will soon make a broader debut.


AI’s transformation of work habits, mindset and lifestyle

At Mindvalley’s AI Summit, former Google Chief Decision Scientist Cassie Kozyrkov described AI as not a substitute for human thought but a magnifier of what the human mind can produce. Rather than replacing us, AI lets us offload mundane tasks and focus on deeper cognitive and creative work.

Work structures are being transformed, not just in factories, but behind computer screens. AI now handles administrative ‘work about work,’ multitasking, scheduling, and research summarisation, lowering friction in knowledge work and enabling people to supervise agents rather than execute tasks manually.

Personal life is being reshaped, too. AI tools for finance or health, such as budgeting apps or personalised diagnostics, move decisions into data-augmented systems with faster insight and fewer human biases.

Meanwhile, creativity is co-authored via AI-generated design, music or writing, requiring humans to filter, refine and ideate beyond the algorithm.

Recognising cognitive change, AI thought leaders envision a new era where ‘blended work’ prevails: humans manage AI agents, call the shots, and wield ethical oversight, while the AI executes pipelines of repetitive or semi-intelligent tasks.

Scholars warn that this model demands new fairness, transparency, and collaboration skills.


Hackers infiltrate Southeast Asian telecom networks

A cyber group breached telecoms across Southeast Asia, deploying advanced tracking tools instead of stealing data. Palo Alto Networks’ Unit 42 assesses the activity as ‘associated with a nation-state nexus’.

A hacking group gained covert access to telecom networks across Southeast Asia, most likely to track users’ locations, according to cybersecurity analysts at Palo Alto Networks’ Unit 42.

The campaign lasted from February to November 2024.

Instead of stealing data or directly communicating with mobile devices, the hackers deployed custom tools such as CordScan, designed to capture mobile network traffic, including protocols associated with the SGSN (Serving GPRS Support Node). These methods suggest the attackers focused on tracking rather than data theft.

Unit 42 assessed the activity ‘with high confidence’ as ‘associated with a nation-state nexus’. The unit notes that ‘this cluster heavily overlaps with activity attributed to Liminal Panda, a nation state adversary tracked by CrowdStrike’; according to CrowdStrike, Liminal Panda is considered to be a ‘likely China-nexus adversary’. It further states that ‘while this cluster significantly overlaps with Liminal Panda, we have also observed overlaps in attacker tooling with other reported groups and activity clusters, including LightBasin, UNC3886, UNC2891 and UNC1945.’

The attackers initially gained access by brute-forcing SSH credentials using login details specific to telecom equipment.

Once inside, they installed new malware, including a backdoor named NoDepDNS, which tunnels malicious data through port 53 — typically used for DNS traffic — in order to avoid detection.

To maintain stealth, the group disguised malware, altered file timestamps, disabled system security features and wiped authentication logs.


The US considers chip tracking to prevent smuggling to China

The US is exploring how to build better location-tracking into advanced chips, as part of an effort to prevent American semiconductors from ending up in China.

Michael Kratsios, a senior official behind Donald Trump’s AI strategy, confirmed that software or physical updates to chips are being considered to support traceability.

Instead of relying on external enforcement, Washington aims to work directly with the tech industry to improve monitoring of chip movements. The strategy forms part of a broader national plan to counter smuggling and maintain US dominance in cutting-edge technologies.

Beijing recently summoned Nvidia representatives to address concerns over American proposals linked to tracking features and perceived security risks in the company’s H20 chips.

Although US officials have not spoken directly with Nvidia or AMD on the matter, Kratsios clarified that chip tracking is now a formal objective.

The move comes even as Trump’s team signals readiness to lift certain export restrictions to China in return for trade benefits, such as rare-earth magnet sales to the US.

Kratsios criticised China’s push to lead global AI regulation, saying countries should define their paths instead of following a centralised model. He argued that the US innovation-first approach offers a more attractive alternative.
