Cloudflare claims Perplexity circumvented website scraping blocks

Cloudflare has accused AI startup Perplexity of ignoring websites’ explicit instructions not to scrape their content.

According to the internet infrastructure company, Perplexity allegedly disguised its identity and used technical workarounds to bypass restrictions set out in robots.txt files, which tell bots which pages they may or may not access.
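For context, a robots.txt file is a plain-text policy that compliant crawlers are expected to check before fetching pages. A minimal sketch using Python's standard library shows how the rules are read (the rules and URLs below are illustrative, not taken from any real site):

```python
from urllib import robotparser

# Illustrative robots.txt: block one named crawler, allow everyone else.
rules = """\
User-agent: PerplexityBot
Disallow: /

User-agent: *
Allow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

print(rp.can_fetch("PerplexityBot", "https://example.com/article"))  # → False
print(rp.can_fetch("SomeOtherBot", "https://example.com/article"))   # → True
```

Compliance is voluntary: the file expresses a request, and nothing in HTTP enforces it, which is why disputes like this one hinge on crawler behaviour rather than technical access control.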

The behaviour was reportedly detected after multiple Cloudflare customers complained about unauthorised scraping attempts.

Instead of respecting these rules, Cloudflare claims Perplexity altered its bots’ user agent to appear as a Google Chrome browser on macOS and switched its network identifiers to avoid detection.
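Such a disguise is possible because the User-Agent header is entirely self-reported: an HTTP client simply declares whatever identity it likes. A minimal Python sketch (the Chrome-on-macOS string below is illustrative, not the exact string Cloudflare observed):

```python
from urllib.request import Request

# Any client can claim to be an ordinary browser in its User-Agent header.
chrome_ua = ("Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) "
             "AppleWebKit/537.36 (KHTML, like Gecko) "
             "Chrome/124.0.0.0 Safari/537.36")

req = Request("https://example.com/page", headers={"User-Agent": chrome_ua})

# urllib stores header names in "Capitalised" form, hence "User-agent".
print(req.get_header("User-agent"))
```

Because the header cannot be trusted, detection of this tactic relies on behavioural signals, which is why Cloudflare cites machine learning and network analysis rather than the declared identity alone.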

The company says these tactics were seen across tens of thousands of domains and millions of daily requests, and that it used machine learning and network analysis to identify the activity.

Perplexity has denied the allegations, calling Cloudflare’s report a ‘sales pitch’ and disputing that the bot named in the findings belongs to the company. Cloudflare has since removed Perplexity’s bots from its verified list and introduced new blocking measures.

The dispute arises as Cloudflare intensifies its efforts to grant website owners greater control over AI crawlers. Last month, it launched a marketplace enabling publishers to charge AI firms for scraping, alongside free tools to block unauthorised data collection.

Perplexity has previously faced criticism over content use, with outlets such as Wired accusing it of plagiarism in 2024.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI to improve ChatGPT’s ability to detect mental or emotional distress

People in search of emotional support during a mental health crisis have reportedly been using ChatGPT as their ‘therapist.’ While this may seem like an easy outlet, reports have shown that ChatGPT’s responses can amplify people’s delusions rather than help them find coping mechanisms. As a result, OpenAI stated that it plans to improve the chatbot’s ability to detect mental distress in the new GPT-5 AI model, which is expected to launch later this week.

OpenAI admits that GPT-4 sometimes failed to recognise signs of delusion or emotional dependency, especially in vulnerable users. To encourage healthier use of ChatGPT, which now serves nearly 700 million weekly users, OpenAI is introducing break reminders during long sessions, prompting users to pause or continue chatting.

Additionally, it plans to refine how and when ChatGPT displays break reminders, following a trend seen on platforms like YouTube and TikTok.


The risky rise of all-in-one AI companions

A concerning new trend is emerging: AI companions are merging with mental health tools, blurring ethical lines. Human therapists are required to maintain a professional distance. Yet AI doesn’t follow such rules; it can be both confidant and counsellor.

AI chatbots are increasingly marketed as friendly companions. At the same time, they can offer mental health advice. Combined, you get an AI friend who also becomes your emotional guide. The mix might feel comforting, but it’s not without risks.

Unlike a human therapist, AI has no ethical compass. It mimics caring responses based on patterns, not understanding. A single prompt can elicit both empathetic advice and best-friend energy, a murky blending of roles with no safeguards.

The deeper issue? There’s little incentive for AI makers to stop this. Blending companionship and therapy boosts user engagement and profits. Unless laws intervene, these all-in-one bots will keep evolving.

There’s also a massive privacy cost. People confide personal feelings to these bots, often daily, for months. The data may be reviewed, stored, and reused to train future models. Your digital friend and therapist might also be your data collector.


Google signs groundbreaking deal to cut data centre energy use

Google has become the first major tech firm to sign formal agreements with US electric utilities to ease grid pressure. The deals come as data centres drive unprecedented energy demand, straining power infrastructure in several regions.

The company will work with Indiana Michigan Power and the Tennessee Valley Authority to reduce electricity usage during peak demand, freeing up capacity for the wider grid when it is needed most.

Under the agreements, Google will temporarily scale down its data centre operations, particularly those linked to energy-intensive AI and machine learning workloads.

Google described the initiative as a way to speed up data centre integration with local grids while avoiding costly infrastructure expansion. The move reflects growing concern over AI’s rising energy footprint.

Demand-response programmes, once used mainly in heavy manufacturing and crypto mining, are now being adopted by tech firms to stabilise grids in return for lower energy costs.


OpenAI launches ‘study mode’ to curb AI-fuelled cheating

OpenAI has introduced a new ‘study mode’ to help students use AI for learning rather than cheating. The update arrives amid a spike in academic dishonesty linked to generative AI tools.

According to The Guardian, a UK survey found nearly 7,000 confirmed cases of AI misuse during the 2023–24 academic year. Universities are under pressure to adapt assessments in response.

Under the chatbot’s Tools menu, the new mode walks users through questions with step-by-step guidance, acting more like a tutor than a solution engine.

Jayna Devani, OpenAI’s international education lead, said the aim is to foster productive use of AI. ‘It’s guiding me towards an answer, rather than just giving it to me first-hand,’ she explained.

The tool can assist with homework and exam prep and even interpret uploaded images of past papers. OpenAI cautions it may still produce errors, underscoring the need for broader conversations around AI in education.


Musk’s robotaxi ambitions threatened as Tesla faces a $243 million autopilot verdict

A recent court verdict requires Tesla to pay approximately $243 million in damages over a 2019 fatal crash involving an Autopilot-equipped Model S.

The Florida jury found Tesla’s driver-assistance software defective, a claim the company intends to appeal, asserting that the driver was solely responsible for the incident.

The ruling may significantly impact Tesla’s ambitions to expand its emerging robotaxi network in the US, fuelling heightened scrutiny over the safety of the company’s autonomous technology from both regulators and the public.

The timing of the legal setback is critical: Tesla is seeking regulatory approval for robotaxi services that are crucial to its market valuation, even as it contends with global competition and backlash against CEO Elon Musk’s political views.

Additionally, the company recently awarded Musk a substantial new compensation package worth approximately $29 billion in stock options, signalling its continued reliance on his leadership at a critical juncture as it plans a transition from a struggling auto business toward futuristic ventures like robotaxis and humanoid robots.

Tesla’s approach to autonomous driving relies on cameras and AI instead of more expensive technologies such as lidar and radar used by competitors, and it has begun a limited robotaxi trial in Texas. However, its aggressive expansion plans for the service contrast starkly with the cautious rollouts of companies such as Waymo, which runs the US’s only commercial driverless robotaxi system.

The jury’s decision also complicates Tesla’s interactions with state regulators, as the company awaits approvals in multiple states, including California and Florida. While Nevada has engaged with Tesla regarding its robotaxi programme, Arizona remains indecisive.

The ruling also challenges Tesla’s safety narrative: although the case involved a distracted driver whose vehicle ran a stop sign and collided with a parked car, the jury still assigned partial blame to the Autopilot system.

Source: Reuters


AI adoption soothes stress even as job fears rise among employees

A recent Fortune survey indicates that 61 percent of white‑collar professionals expect AI to make their roles, or even their entire teams, obsolete within 3–5 years, yet most continue to rely on AI tools daily without visible concern.

Seventy percent of respondents credit AI with boosting their creativity and productivity, and 40 percent say it has eased stress and improved work‑life balance. Despite these benefits, many admit to ‘feigning’ AI use in workplace settings, often driven by peer pressure or a lack of formal training.

Executive commentary underscores the tension: senior business leaders, including Ford CEO Jim Farley and Anthropic CEO Dario Amodei, predict rapid AI‑driven disruption of white‑collar roles. Some executives forecast that up to 50 percent of certain job categories could be eliminated, though others argue AI may open new opportunities.

Academic studies suggest a more nuanced impact: AI is reshaping role definitions by automating routine tasks while increasing demand for complementary skills, such as ethics, teamwork, and digital fluency. Wage benefits are growing in jobs that effectively blend AI with human oversight.


Altman shares first glimpse of GPT-5 via Pantheon screenshot

OpenAI CEO Sam Altman shared a screenshot on X showing GPT-5 in action. The post casually endorsed the animated sci-fi series Pantheon, a cult tech favourite exploring general AI.

When asked if GPT-5 also recommends the show, Altman replied with a screenshot: ‘turns out yes’. It marked one of the earliest public glimpses of the new model, hinting at expanded capabilities.

GPT-5 is expected to outperform its predecessors, with a larger context window, multimodal abilities, and more agentic task handling. The screenshot also shows that some quirks remain, such as its fondness for the em dash.

The model identified Pantheon as having a 100% critic rating on Rotten Tomatoes and described it as ‘cerebral, emotional, and philosophically intense’. Business Insider verified the score and tone of the reviews.

OpenAI faces mounting pressure to keep pace with rivals like Google DeepMind, Meta, xAI, and Anthropic. Public teasers such as this one suggest GPT-5 will soon make a broader debut.


AI’s transformation of work habits, mindset and lifestyle

At Mindvalley’s AI Summit, former Google Chief Decision Scientist Cassie Kozyrkov described AI as not a substitute for human thought but a magnifier of what the human mind can produce. Rather than replacing us, AI lets us offload mundane tasks and focus on deeper cognitive and creative work.

Work structures are being transformed, not just in factories, but behind computer screens. AI now handles administrative ‘work about work,’ multitasking, scheduling, and research summarisation, lowering friction in knowledge work and enabling people to supervise agents rather than execute tasks manually.

Personal life is being reshaped, too. AI tools for finance or health, such as budgeting apps or personalised diagnostics, move decisions into data-augmented systems with faster insight and fewer human biases.

Meanwhile, creativity is co-authored via AI-generated design, music or writing, requiring humans to filter, refine and ideate beyond the algorithm.

Recognising cognitive change, AI thought leaders envision a new era where ‘blended work’ prevails: humans manage AI agents, call the shots, and wield ethical oversight, while the AI executes pipelines of repetitive or semi-intelligent tasks.

Scholars warn that this model demands new fairness, transparency, and collaboration skills.


The US considers chip tracking to prevent smuggling to China

The US is exploring how to build better location-tracking into advanced chips, as part of an effort to prevent American semiconductors from ending up in China.

Michael Kratsios, a senior official behind Donald Trump’s AI strategy, confirmed that software or physical updates to chips are being considered to support traceability.

Instead of relying on external enforcement, Washington aims to work directly with the tech industry to improve monitoring of chip movements. The strategy forms part of a broader national plan to counter smuggling and maintain US dominance in cutting-edge technologies.

Beijing recently summoned Nvidia representatives to address concerns over American proposals linked to tracking features and perceived security risks in the company’s H20 chips.

Although US officials have not held direct talks with Nvidia or AMD on the matter, Kratsios clarified that chip tracking is now a formal objective.

The move comes even as Trump’s team signals readiness to lift certain export restrictions to China in return for trade benefits, such as rare-earth magnet sales to the US.

Kratsios criticised China’s push to lead global AI regulation, saying countries should define their paths instead of following a centralised model. He argued that the US innovation-first approach offers a more attractive alternative.
