AI news summaries threaten the future of journalism

Generative AI tools like ChatGPT significantly impact traditional online news by reducing search traffic to media websites.

As these AI assistants summarise news content directly in search results, users are less likely to click through to the sources, threatening already struggling publishers who depend on ad revenue and subscriptions.

A Pew Research Center study found that when AI summaries appear in search results, users click suggested links half as often as in traditional search formats.

Matt Karolian of Boston Globe Media warns that the next few years will be especially difficult for publishers, urging them to adapt or risk being ‘swept away.’

While some, like the Boston Globe, have gained a modest number of new subscribers through ChatGPT, these numbers pale in comparison with other traffic sources.

To adapt, publishers are turning to Generative Engine Optimisation (GEO), tailoring content so AI tools surface and cite it more readily. Some have blocked crawlers to prevent data harvesting, while others have reopened access to retain visibility.
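In practice, those blocking preferences are expressed in a site’s robots.txt file. The minimal Python sketch below, using the standard library’s urllib.robotparser, shows how such directives gate a crawler; the sample rules are an illustrative assumption, not any particular publisher’s policy (GPTBot is the crawler token OpenAI documents for its bot).

# Minimal sketch: how robots.txt directives gate an AI crawler.
# The rules below are hypothetical; GPTBot is OpenAI's documented crawler name.
from urllib.robotparser import RobotFileParser

sample_rules = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(sample_rules.splitlines())

# A compliant AI crawler checks before fetching; ordinary agents stay allowed.
print(parser.can_fetch("GPTBot", "https://example.com/article"))      # False
print(parser.can_fetch("Mozilla/5.0", "https://example.com/article")) # True

Publishers pursuing GEO simply do the reverse for crawlers they welcome, keeping access open so their content stays visible and citable in AI answers.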

Legal battles are unfolding, including a major lawsuit from The New York Times against OpenAI and Microsoft. Meanwhile, licensing deals between tech giants and media organisations are beginning to take shape.

With nearly 15% of under-25s now relying on AI for news, concerns are mounting over the credibility of information. As AI reshapes how news is consumed, the survival of original journalism and public trust in it face grave uncertainty.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft offers $5 million for cloud and AI vulnerabilities

Microsoft is offering security researchers up to $5 million for uncovering critical vulnerabilities in its products, with a focus on cloud and AI systems. The Zero Day Quest contest will return in spring 2026, following a $1.6 million payout in its previous edition.

Researchers are invited to submit discoveries between 4 August and 4 October 2025, targeting Azure, Copilot, M365, and other significant services. High-severity flaws are eligible for a 50% bonus payout, increasing the incentive for impactful findings.

Top participants will receive exclusive invitations to a live hacking event at Microsoft’s Redmond campus. The event promises collaboration with product teams and the Microsoft Security Response Center.

Training from Microsoft’s AI Red Team and other internal experts will also be available. The company encourages public disclosure of patched findings to support the broader cybersecurity community.

The competition aligns with Microsoft’s Secure Future Initiative, which aims to make cloud and AI systems secure by design, by default, and in operation. Vulnerabilities will be disclosed transparently, even if no customer action is needed.

Full details and submission rules are available through the MSRC Researcher Portal. All reports will be subject to Microsoft’s bug bounty terms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Apple develops smart search engine to rival ChatGPT

Apple is developing its own AI-powered answer engine to rival ChatGPT, marking a strategic turn in the company’s AI approach. The move comes as Apple aims to close the gap with competitors in the fast-moving AI race.

A newly formed internal team, Answers, Knowledge and Information, is working on a tool to browse the web and deliver direct responses to users.

Led by former Siri head Robby Walker, the project is expected to expand across key Apple services, including Siri, Safari and Spotlight.

Job postings suggest Apple is recruiting talent with search engine and algorithm expertise. CEO Tim Cook has signalled Apple’s willingness to acquire companies that could speed up its AI progress.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Late-stage GenAI deal sizes triple as Ireland sees growing interest

According to EY Ireland, global investment in generative AI surged to $49.2bn in the first half of 2025, eclipsing the full-year total for 2024. Despite a drop in the number of deals, total value doubled year-on-year, reflecting a pivot towards more mature and revenue-focused ventures.

Average late-stage deal size has more than tripled to $1.55bn, while early and seed-stage activity has stagnated or declined. Landmark rounds from OpenAI, xAI, Anthropic, and Databricks drove much of the volume, alongside a notable $3.3bn agentic AI acquisition by Capgemini.

Ireland remains a strong adopter of AI, with 63% of startups using the technology. Yet funding gaps persist, particularly between €1m and €10m, posing challenges for growth-stage firms despite a strong local talent base.

Sprout Social’s acquisition of Irish analytics firm NewsWhip, though not part of the H1 figures, points to growing international interest in Irish AI capabilities. Meanwhile, US firms still dominate global deal value, capturing 97%, with the Middle East rising fast and Europe trailing at just 2%.

EY forecasts that sector-specific GenAI platforms, especially in cybersecurity and compliance, will become the next magnet for venture capital through late 2025 and beyond.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cloudflare claims Perplexity circumvented website scraping blocks

Cloudflare has accused AI startup Perplexity of ignoring websites’ explicit instructions not to scrape their content.

According to the internet infrastructure company, Perplexity has allegedly disguised its identity and used technical workarounds to bypass restrictions set out in robots.txt files, which tell bots which pages they may or may not access.

The behaviour was reportedly detected after multiple Cloudflare customers complained about unauthorised scraping attempts.

Instead of respecting these rules, Cloudflare claims Perplexity altered its bots’ user agent to appear as a Google Chrome browser on macOS and switched its network identifiers to avoid detection.
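For context, a user agent is nothing more than a self-declared HTTP header set by the client, which is why it cannot prove identity on its own. A minimal Python sketch of how any client declares a browser-like string (the URL and shortened header value are illustrative assumptions):

# Minimal sketch: the User-Agent header is set by the client itself,
# so user-agent-based rules alone cannot verify who is really connecting.
# The URL and header value here are illustrative, not Perplexity's traffic.
import urllib.request

request = urllib.request.Request(
    "https://example.com/",
    headers={"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)"},
)
with urllib.request.urlopen(request) as response:
    print(response.status)  # the server sees only the declared string

Because the header is trivially set by the client, identifying a disguised crawler requires stronger network-level signals.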

The company says these tactics were seen across tens of thousands of domains and millions of daily requests, and that it used machine learning and network analysis to identify the activity.

Perplexity has denied the allegations, calling Cloudflare’s report a ‘sales pitch’ and disputing that the bot named in the findings belongs to the company. Cloudflare has since removed Perplexity’s bots from its verified list and introduced new blocking measures.

The dispute arises as Cloudflare intensifies its efforts to grant website owners greater control over AI crawlers. Last month, it launched a marketplace enabling publishers to charge AI firms for scraping, alongside free tools to block unauthorised data collection.

Perplexity has previously faced criticism over content use, with outlets such as Wired accusing it of plagiarism in 2024.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI to improve ChatGPT’s ability to detect mental or emotional distress

People in mental health crises are reportedly turning to ChatGPT as their ‘therapist’ in search of emotional support. While this may seem like an easy outlet, reports have shown that ChatGPT’s responses can amplify users’ delusions rather than help them find coping mechanisms. As a result, OpenAI has stated that it plans to improve the chatbot’s ability to detect mental distress in its new GPT-5 AI model, which is expected to launch later this week.

OpenAI admits that GPT-4 sometimes failed to recognise signs of delusion or emotional dependency, especially in vulnerable users. To encourage healthier use of ChatGPT, which now serves nearly 700 million weekly users, OpenAI is introducing break reminders during long sessions, prompting users to pause or continue chatting.

Additionally, it plans to refine how and when ChatGPT displays break reminders, following a trend seen on platforms like YouTube and TikTok.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

The risky rise of all-in-one AI companions

A concerning new trend is emerging: AI companions are merging with mental health tools, blurring ethical lines. Human therapists are required to maintain a professional distance. Yet AI doesn’t follow such rules; it can be both confidant and counsellor.

AI chatbots are increasingly marketed as friendly companions. At the same time, they can offer mental health advice. Combined, you get an AI friend who also becomes your emotional guide. The mix might feel comforting, but it’s not without risks.

Unlike a human therapist, AI has no ethical compass. It mimics caring responses based on patterns, not understanding. A single prompt might elicit empathetic advice delivered with best-friend energy, a murky blend of roles without safeguards.

The deeper issue? There’s little incentive for AI makers to stop this. Blending companionship and therapy boosts user engagement and profits. Unless laws intervene, these all-in-one bots will keep evolving.

There’s also a massive privacy cost. People confide personal feelings to these bots, often daily, for months. The data may be reviewed, stored, and reused to train future models. Your digital friend and therapist might also be your data collector.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google signs groundbreaking deal to cut data centre energy use

Google has become the first major tech firm to sign formal agreements with US electric utilities to ease grid pressure. The deals come as data centres drive unprecedented energy demand, straining power infrastructure in several regions.

The company will work with Indiana Michigan Power and the Tennessee Valley Authority to reduce electricity usage during peak demand, freeing up power for the wider grid when it is needed most.

Under the agreements, Google will temporarily scale down its data centre operations, particularly those linked to energy-intensive AI and machine learning workloads.

Google described the initiative as a way to speed up data centre integration with local grids while avoiding costly infrastructure expansion. The move reflects growing concern over AI’s rising energy footprint.

Demand-response programmes, once used mainly in heavy manufacturing and crypto mining, are now being adopted by tech firms to stabilise grids in return for lower energy costs.
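As a rough illustration of the demand-response pattern, the hypothetical Python sketch below defers flexible batch work, such as ML training, when the utility signals a peak event; the names, types, and logic are assumptions for illustration, not Google’s actual system.

# Toy sketch of demand-response scheduling: defer flexible (batch) jobs
# while the utility signals a peak event. All names/values are hypothetical.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    flexible: bool  # batch ML training can wait; serving traffic cannot

def schedule(jobs: list[Job], peak_event: bool) -> tuple[list[Job], list[Job]]:
    """Split jobs into (run_now, deferred) based on grid conditions."""
    if not peak_event:
        return jobs, []
    run_now = [j for j in jobs if not j.flexible]
    deferred = [j for j in jobs if j.flexible]
    return run_now, deferred

jobs = [Job("search-serving", flexible=False), Job("ml-training", flexible=True)]
run_now, deferred = schedule(jobs, peak_event=True)
print([j.name for j in run_now])   # ['search-serving']
print([j.name for j in deferred])  # ['ml-training']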

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI launches ‘study mode’ to curb AI-fuelled cheating

OpenAI has introduced a new ‘study mode’ to help students use AI for learning rather than cheating. The update arrives amid a spike in academic dishonesty linked to generative AI tools.

According to The Guardian, a UK survey found nearly 7,000 confirmed cases of AI misuse during the 2023–24 academic year. Universities are under pressure to adapt assessments in response.

Found under the chatbot’s Tools menu, the new mode walks users through questions with step-by-step guidance, acting more like a tutor than a solution engine.

Jayna Devani, OpenAI’s international education lead, said the aim is to foster productive use of AI. ‘It’s guiding me towards an answer, rather than just giving it to me first-hand,’ she explained.

The tool can assist with homework and exam prep and even interpret uploaded images of past papers. OpenAI cautions it may still produce errors, underscoring the need for broader conversations around AI in education.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Musk’s robotaxi ambitions threatened as Tesla faces a $243 million autopilot verdict

A recent court verdict has ordered Tesla to pay approximately $243 million in damages over a 2019 fatal crash involving an Autopilot-equipped Model S.

The Florida jury found Tesla’s driver-assistance software defective, a claim the company intends to appeal, asserting that the driver was solely responsible for the incident.

The ruling may significantly impact Tesla’s ambitions to expand its emerging robotaxi network in the US, fuelling heightened scrutiny over the safety of the company’s autonomous technology from both regulators and the public.

The timing of the legal setback is critical: Tesla is seeking regulatory approval for robotaxi services that are crucial to its market valuation and its efforts to fend off global competition, all while facing backlash against CEO Elon Musk’s political views.

Additionally, the company has recently awarded Musk a substantial new compensation package worth approximately $29 billion in stock, signalling its continued reliance on his leadership at a critical juncture, as it plans a transition from a struggling auto business towards futuristic ventures such as robotaxis and humanoid robots.

Tesla’s approach to autonomous driving, which relies on cameras and AI instead of more expensive technologies like lidar and radar used by competitors, has prompted it to start a limited robotaxi trial in Texas. However, its aggressive expansion plans for this service starkly contrast with the cautious rollouts by companies such as Waymo, which runs the US’s only commercial driverless robotaxi system.

The jury’s decision also complicates Tesla’s interactions with state regulators, as the company awaits approvals in multiple states, including California and Florida. While Nevada has engaged with Tesla regarding its robotaxi programme, Arizona remains indecisive.

The ruling also challenges Tesla’s safety narrative, especially since the case involved a distracted driver whose vehicle ran a stop sign and collided with a parked car, yet the jury still assigned partial blame to the Autopilot system.

Source: Reuters

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!