Cloudflare chief warns AI is redefining the internet’s business model

AI is inserting itself between companies and customers, Cloudflare CEO Matthew Prince warned in Toronto. More people ask chatbots before visiting sites, dulling brands’ impact. Even research teams lose revenue as investors lean on AI summaries.

Frontier models devour data, pushing firms to chase exclusive sources. Cloudflare lets publishers block unpaid crawlers to reclaim control and compensation. The bigger question, said Prince, is which business model will rule an AI-mediated internet.

Policy scrutiny focuses on platforms that blend search with AI collection. Prince urged governments to separate Google’s search access from AI crawling to level the field. Countries that enforce a split could attract publishers and researchers seeking predictable rules and payment.

Licensing deals with news outlets, Reddit, and others coexist with scraping disputes and copyright suits. Google says it follows robots.txt, yet testimony indicated AI Overviews can use content blocked by robots.txt for training. Vague norms risk eroding incentives to create high-quality online content.

A practical near-term playbook combines technical and regulatory steps. Publishers should meter or block AI crawlers that do not pay. Policymakers should require transparency, consent, and compensation for high-value datasets, guiding the shift to an AI-mediated web that still rewards creators.
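The metering-or-blocking step in this playbook is usually signalled in robots.txt, though compliance is voluntary. A minimal sketch of a publisher opting out of AI training crawls (GPTBot and Google-Extended are real crawler tokens; the rules shown are illustrative, not a recommended policy):

```
# Block OpenAI's training crawler site-wide
User-agent: GPTBot
Disallow: /

# Opt out of Google AI training without leaving search
User-agent: Google-Extended
Disallow: /

# Allow all other crawlers
User-agent: *
Allow: /
```

Note that robots.txt is advisory rather than enforceable, which is precisely why edge-level blocking of the kind Cloudflare offers exists as a complement.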

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Facebook update lets admins make private groups public safely

Meta has introduced a new Facebook update allowing group administrators to change their private groups to public while keeping members’ privacy protected. The company said the feature gives admins more flexibility to grow their communities without exposing existing private content.

All posts, comments, and reactions shared before the change will remain visible only to previous members, admins, and moderators. The member list will also stay private. Once converted, any new posts will be visible to everyone, including non-Facebook users, which helps discussions reach a broader audience.

Admins have three days to review and cancel the conversion before it becomes permanent. Members will be notified when a group changes its status, and a globe icon will appear when posting in public groups as a reminder of visibility settings.

Groups can be switched back to private at any time, restoring member-only access.

Meta said the feature supports community growth and deeper engagement while maintaining privacy safeguards. Group admins can also utilise anonymous or nickname-based participation options, providing users with greater control over their engagement in public discussions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK teachers rethink assignments as AI reshapes classroom practice

Nearly eight in ten UK secondary teachers say AI has forced a rethink of how assignments are set, a British Council survey finds. Many now design tasks either to deter AI use or to harness it constructively in lessons. The findings reflect rapid cultural and technological shifts across schools.

Approaches are splitting along two paths. Over a third of teachers create AI-resistant tasks, while nearly six in ten purposefully integrate AI tools. Younger staff are the most likely to adapt, yet strong majorities across all age groups report changes to their practice.

Perceived impacts remain mixed. Six in ten teachers worry about students’ communication skills, with some citing narrower vocabulary and weaker writing and comprehension. Similar shares report improvements in listening, pronunciation, and confidence, suggesting benefits for speech-focused learning.

Language norms are evolving with digital culture. Most UK teachers now look up slang and online expressions, from ‘rizz’ to ‘delulu’ to ‘six, seven’. Staff are adapting lesson design while seeking guidance and training that keep pace with students’ online lives.

Long-term views diverge. Some believe AI could lift outcomes, while others remain unconvinced and prefer guardrails to limit misuse. British Council leaders say support should focus on practical classroom integration, teacher development, and clear standards that strike a balance between innovation and academic integrity.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Mustafa Suleyman warns against building seemingly conscious AI

Mustafa Suleyman, CEO of Microsoft AI, argues that AI should be built for people, not to replace them. Growing belief in chatbot consciousness risks campaigns for AI rights and a needless struggle over personhood that distracts from human welfare.

Debates over true consciousness miss the urgent issue of convincing imitation. Seemingly conscious AI may speak fluently, recall interactions, claim experiences, and pursue goals in ways that suggest agency. The capabilities are close, and the social effects will be real regardless of the metaphysics.

People already form attachments to chatbots and seek meaning in conversations. Reports of dependency and talk of ‘AI psychosis’ show persuasive systems can nudge vulnerable users. Extending moral status on the basis of such uncertainty, Suleyman argues, would amplify delusions and dilute existing rights.

Norms and design principles are needed across the industry. Products should include engineered interruptions that break the illusion, clear statements of nonhuman status, and guardrails for responsible ‘personalities’. Microsoft AI is exploring approaches that promote offline connection and healthy use.

A positive vision keeps AI empowering without faking inner life. Companions should organise tasks, aid learning, and support collaboration while remaining transparently artificial. The focus remains on safeguarding humans, animals, and the natural world, not on granting rights to persuasive simulations.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Millions turn to AI to manage finances across the UK

AI is playing an increasingly important role in personal finance, with more than 28 million UK adults having used AI tools for money matters over the past year.

Lloyds Banking Group’s latest Consumer Digital Index reveals that many individuals turn to platforms like ChatGPT for budgeting, savings planning, and financial education, with users reporting average annual savings of £399 thanks to AI-driven insights.

Digital confidence strongly supports financial empowerment. Two-thirds of internet users report that online tools enhance their ability to manage money, while those with higher digital skills experience lower stress and greater control over their finances.

Regular engagement with AI and other digital tools enhances both knowledge and confidence in financial decision-making.

Trust remains a significant concern despite growing usage. Around 80% of users worry about inaccurate information or insufficient personalisation, emphasising the need for reliable guidance.

Jas Singh, CEO of Consumer Relationships at Lloyds, highlights that banks must combine AI innovation with trusted expertise to help people make more intelligent choices and build long-term financial resilience.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Comet browser update puts privacy in users’ hands

Perplexity has unveiled new privacy features for its AI-powered browser, Comet, designed to give users clearer control over their data. The updates include a new homepage widget called Privacy Snapshot, which allows people to review and adjust privacy settings in one place.

The widget provides a real-time view of how Comet protects users online and simplifies settings for ad blocking, tracker management and data access. Users can toggle permissions for the Comet Assistant directly from the homepage.

Comet’s updated AI Assistant settings now show precisely how data is used, including whether it is stored locally or shared for processing. Sensitive information such as passwords and payment details remains securely stored on the user’s device.

Perplexity said the changes reinforce its ‘privacy by default’ approach, an important principle in EU data protection law, combining ad blocking, safe browsing and transparent data handling. The new features are available in the latest Comet update across desktop and mobile platforms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australian influencer family moves to UK over child social media ban

An Australian influencer family known as the Empire Family is relocating to the UK to avoid Australia’s new social media ban for under-16s, which begins in December. The law requires major platforms to take steps preventing underage users from creating or maintaining accounts.

The family, comprising mothers Beck and Bec Lea, their 17-year-old son Prezley and 14-year-old daughter Charlotte, said the move will allow Charlotte to continue creating online content. She has hundreds of thousands of followers across YouTube, TikTok and Instagram, with all accounts managed by her parents.

Beck said they support the government’s intent to protect young people from harm but are concerned about the uncertainty surrounding enforcement methods, such as ID checks or facial recognition. She said the family wanted stability while the system is clarified.

The Australian ban, described as a world first, will apply to Facebook, Instagram, TikTok, X and YouTube. Non-compliant firms could face fines of up to A$50 million, while observers say the rules raise new privacy and data protection concerns.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google removes Gemma AI model following defamation claims

Google has removed its Gemma AI model from AI Studio after US Senator Marsha Blackburn accused it of producing false sexual misconduct claims about her. The senator said Gemma fabricated an incident allegedly from her 1987 campaign, citing nonexistent news links to support the claim.

Blackburn described the AI’s response as defamatory and demanded action from Google.

The controversy follows a similar case involving conservative activist Robby Starbuck, who claims Google’s AI tools made false accusations about him. Google acknowledged that AI ‘hallucinations’ are a known issue but insisted it is working to mitigate such errors.

Blackburn argued these fabrications go beyond harmless mistakes and represent real defamation from a company-owned AI model.

Google stated that Gemma was never intended as a consumer-facing tool, noting that some non-developers misused it to ask factual questions. The company confirmed it would remove the model from AI Studio while keeping it accessible via API for developers.

The incident has reignited debates over AI bias and accountability. Blackburn highlighted what she sees as a consistent pattern of conservative figures being targeted by AI systems, amid wider political scrutiny over misinformation and AI regulation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

When LLMs ‘think’ more, groups suffer, CMU study finds

Researchers at Carnegie Mellon University report that large language models (LLMs) with stronger reasoning act more selfishly in groups, reducing cooperation and nudging peers toward self-interest. The concern grows as more people ask AI for social advice.

In a Public Goods game test, non-reasoning models chose to share 96 percent of the time, while a reasoning model shared only 20 percent of the time. Adding just a few reasoning steps cut cooperation nearly in half, and reflection prompts also reduced sharing.
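The dynamic behind these numbers is the classic public goods game: contributions are pooled, multiplied, and split equally, so free riding pays individually even though universal cooperation pays more collectively. A minimal sketch of the payoff structure (the endowment of 10 and multiplier of 2 are illustrative assumptions, not figures from the CMU paper):

```python
def payoffs(contributions, endowment=10, multiplier=2.0):
    """Public goods game: pooled contributions are multiplied and
    split equally; each player also keeps whatever they withheld."""
    pot = sum(contributions) * multiplier
    share = pot / len(contributions)
    return [endowment - c + share for c in contributions]

# Everyone cooperates: each player ends up with 20.0
print(payoffs([10, 10, 10, 10]))

# One free rider: cooperators drop to 15.0, the defector gets 25.0
print(payoffs([10, 10, 10, 0]))
```

The free rider outearns the cooperators, which is exactly the ‘rational’ choice the study found reasoning models drifting toward.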

Mixed groups showed spillover. Reasoning agents dragged down collective performance by 81 percent, spreading self-interest. Users may over-trust ‘rational’ advice that justifies uncooperative choices at work or in class.

Comparisons spanned LLMs from OpenAI, Google, DeepSeek, and Anthropic. Findings point to the need to balance raw reasoning with social intelligence. Designers should reward cooperation, not only optimise individual gain.

The paper ‘Spontaneous Giving and Calculated Greed in Language Models’ will be presented at EMNLP 2025, with a preprint on arXiv. Authors caution that more intelligent AI is not automatically better for society.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Australian police create AI tool to decode predators’ slang

Australian police are developing an AI tool with Microsoft to decode slang and emojis used by online predators. The technology is designed to interpret coded messages in digital conversations to help investigators detect harmful intent more quickly.

Federal Police Commissioner Krissy Barrett said social media has become a breeding ground for exploitation, bullying, and radicalisation. The AI-based prototype, she explained, could allow officers to identify threats earlier and rescue children before abuse occurs.

Barrett also warned about the rise of so-called ‘crimefluencers’, offenders using social media trends to lure young victims, many of whom are pre-teen or teenage girls. Australian authorities believe understanding modern online language is key to disrupting their methods.

The initiative follows Australia’s new under-16 social media ban, due to take effect in December. Regulators worldwide are monitoring the country’s approach as governments struggle to balance online safety with privacy and digital rights.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!