Microsoft deal signals pay-per-use path for AI access to People Inc. content

People Inc. has joined Microsoft’s publisher content marketplace in a pay-per-use deal that compensates media for AI access. Copilot will be the first buyer, while People Inc. continues to block most AI crawlers via Cloudflare to force paid licensing.

People Inc., formerly Dotdash Meredith, said Microsoft’s marketplace lets AI firms pay ‘à la carte’ for specific content. The agreement differs from its earlier OpenAI pact, which the company described as more ‘all-you-can-eat’, but the priority remains that content is ‘respected and paid for’.

Executives disclosed a sharp fall in Google search referrals: from 54% of traffic two years ago to 24% last quarter, citing AI Overviews. Leadership argues that crawler identification and paid access should become the norm as AI sits between publishers and audiences.

Blocking non-paying bots has ‘brought almost everyone to the table’, People Inc. said, signalling more licences to come. The Microsoft marketplace approach is framed as a model for compensating rights-holders while enabling AI tools to use high-quality, authorised material.

IAC reported People Inc. digital revenue up 9% to $269m, with performance marketing and licensing up 38% and 24% respectively. The publisher also acquired Feedfeed, expanding its food vertical reach while pursuing additional AI content partnerships.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Salesforce’s Agentforce helps organisations deliver 24/7 support

Organisations across public and private sectors are using Salesforce’s Agentforce to engage people whenever and wherever they need support.

From local governments to hospitals and education platforms, AI systems are transforming how services are delivered and accessed.

In the city of Kyle, Texas, an Agentforce-driven 311 app enables residents to report issues such as potholes or water leaks. The city plans to make the system voice-enabled, reducing traditional call volumes while maintaining a steady flow of service requests and faster responses.

At Pearson, AI enables students to access their online learning platforms instantly, regardless of their time zone. The company stated that the technology fosters loyalty by providing immediate assistance, rather than requiring users to wait for human support.

Meanwhile, UChicago Medicine utilises AI to streamline patient interactions, from prescription refills to scheduling, while ambient listening tools enable doctors to focus entirely on patients rather than typing notes.

Salesforce said Agentforce empowers organisations to save resources while enhancing trust, accessibility, and service quality. By meeting people on their own terms, AI enables more responsive and human-centred interactions across various industries.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Study finds AI summaries can flatten understanding compared with reading sources

AI summaries can speed learning, but an extensive study finds they often blunt depth and recall. More than 10,000 participants used chatbots or traditional web search to learn assigned topics. Those relying on chatbot digests showed shallower knowledge and offered fewer concrete facts afterwards.

Researchers from Wharton and New Mexico State conducted seven experiments across various tasks, including gardening, health, and scam awareness. Some groups saw the same facts, either as an AI digest or as source links. Advice written after AI use was shorter, less factual, and more similar across users.

Follow-up raters judged AI-derived advice as less informative and less trustworthy. Participants who used AI also reported spending less time with sources. Lower effort during synthesis reduces the mental work that cements understanding.

Findings land amid broader concerns about summary reliability. A BBC-led investigation recently found that major chatbots frequently misrepresented news content in their responses. The evidence suggests AI summaries work best as a support for critical reading, rather than a substitute for it.

The practical takeaway for learners and teachers is straightforward. Use AI to scaffold questions, outline queries, and compare viewpoints. Build lasting understanding by reading multiple sources, checking citations, and writing your own synthesis before asking a model to refine it.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK teachers rethink assignments as AI reshapes classroom practice

Nearly eight in ten UK secondary teachers say AI has forced a rethink of how assignments are set, a British Council survey finds. Many now design tasks either to deter AI use or to harness it constructively in lessons. Findings reflect rapid cultural and technological shifts across schools.

Approaches are splitting along two paths. Over a third of teachers design AI-resistant tasks, while nearly six in ten purposefully integrate AI tools. Younger staff are most likely to adapt, yet strong majorities across all age groups report changes to their practice.

Perceived impacts remain mixed. Six in ten teachers worry about students’ communication skills, with some citing narrower vocabulary and weaker writing and comprehension. Similar shares report improvements in listening, pronunciation, and confidence, suggesting benefits for speech-focused learning.

Language norms are evolving with digital culture. Most UK teachers now look up slang and online expressions, from ‘rizz’ to ‘delulu’ to ‘six, seven’. Staff are adapting lesson design while seeking guidance and training that keeps pace with students’ online lives.

Long-term views diverge. Some believe AI could lift outcomes, while others remain unconvinced and prefer guardrails to limit misuse. British Council leaders say support should focus on practical classroom integration, teacher development, and clear standards that strike a balance between innovation and academic integrity.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Mustafa Suleyman warns against building seemingly conscious AI

Mustafa Suleyman, CEO of Microsoft AI, argues that AI should be built for people, not to replace them. Growing belief in chatbot consciousness risks campaigns for AI rights and a needless struggle over personhood that distracts from human welfare.

Debates over true consciousness miss the urgent issue of convincing imitation. Seemingly conscious AI may speak fluently, recall interactions, claim experiences, and set goals in ways that suggest agency. The capabilities are close, and the social effects will be real regardless of metaphysics.

People already form attachments to chatbots and seek meaning in conversations. Reports of dependency and talk of ‘AI psychosis’ show persuasive systems can nudge vulnerable users. Extending moral status amid such uncertainty, Suleyman argues, would amplify delusions and dilute existing rights.

Norms and design principles are needed across the industry. Products should include engineered interruptions that break the illusion, clear statements of nonhuman status, and guardrails for responsible ‘personalities’. Microsoft AI is exploring approaches that promote offline connection and healthy use.

A positive vision keeps AI empowering without faking inner life. Companions should organise tasks, aid learning, and support collaboration while remaining transparently artificial. The focus remains on safeguarding humans, animals, and the natural world, not on granting rights to persuasive simulations.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Live exploitation of CVE-2024-1086 across older Linux versions flagged by CISA

CISA’s warning serves as a reminder that ransomware is not confined to Windows. A Linux kernel flaw, CVE-2024-1086, is being exploited in real-world incidents, and federal networks face a November 20 patch-or-disable deadline. Businesses should read it as their cue, too.

Attackers who reach a vulnerable host can escalate privileges to root, bypass defences, and deploy malware. Many older kernels remain in circulation even though upstream fixes were shipped in January 2024, creating a soft target when paired with phishing and lateral movement.

Practical steps matter more than labels. Patch affected kernels where possible, isolate any components that cannot be updated, and verify the running versions against vendor advisories and the NIST catalogue. Treat emergency changes as production work, with change logs and checks.
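
As a rough illustration of that verification step, the sketch below (Python) compares the kernel a host reports against a per-series ‘first patched release’ table. The version numbers in PATCHED_FLOORS are placeholders, not authoritative: take the actual fixed versions for CVE-2024-1086 from your distribution’s advisory or the NIST catalogue entry before acting on the result.

```python
# Minimal sketch: flag hosts whose kernel predates the patched release for
# its stable series. The floors below are PLACEHOLDERS -- replace them with
# the fixed versions listed in your vendor advisory / NIST for CVE-2024-1086.
import platform
import re

PATCHED_FLOORS = {
    # (major, minor): first patched point release for that series (placeholder)
    (6, 6): (6, 6, 99),
    (6, 1): (6, 1, 99),
    (5, 15): (5, 15, 199),
}

def parse_kernel(release: str):
    """Pull (major, minor, patch) out of a string like '6.1.55-generic'."""
    m = re.match(r"(\d+)\.(\d+)\.(\d+)", release)
    return tuple(int(x) for x in m.groups()) if m else None

def needs_review(release: str) -> bool:
    """True when the kernel cannot be confirmed as patched from the table."""
    ver = parse_kernel(release)
    if ver is None:
        return True                      # unparseable: check manually
    floor = PATCHED_FLOORS.get(ver[:2])
    if floor is None:
        return True                      # series not in the table: check advisories
    return ver < floor

if __name__ == "__main__":
    release = platform.release()
    verdict = "needs review" if needs_review(release) else "at or above patched floor"
    print(f"kernel {release}: {verdict}")
```

In practice the same check would run across the fleet through existing inventory tooling, and anything flagged would be cross-checked against the advisory before patching or isolating the host.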

Resilience buys time when updates lag. Enforce least privilege, require MFA for admin entry points, and segment crown-jewel services. Tune EDR to spot privilege-escalation behaviour and suspicious modules, then rehearse restores from offline, immutable backups.

Security habits shape outcomes as much as CVEs. Teams that patch quickly, validate fixes, and document closure shrink the blast radius. Teams that defer kernel maintenance invite repeat visits, turning a known bug into an avoidable outage.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft AI chief rules out machine consciousness as purely biological phenomenon

Microsoft’s AI head, Mustafa Suleyman, has dismissed the idea that AI could ever become conscious, arguing that consciousness is a property exclusive to biological beings.

Speaking at the AfroTech Conference in Houston, Suleyman said researchers should stop exploring the notion of sentient AI, calling it ‘the wrong question’.

He explained that while AI can simulate experience, it cannot feel pain or possess subjective awareness.

Suleyman compared AI’s output to a narrative illusion rather than genuine consciousness, aligning with the philosophical theory of biological naturalism, which ties awareness to living brain processes.

Suleyman has become one of the industry’s most outspoken critics of conscious AI research. His book ‘The Coming Wave’ and his recent essay ‘We must build AI for people; not to be a person’ warn against anthropomorphising machines.

He also confirmed that Microsoft will not develop erotic chatbots, a direction that has been embraced by competitors such as OpenAI and xAI.

He stressed that Microsoft’s AI systems are designed to serve humans, not mimic them. The company’s Copilot assistant now includes a ‘real talk’ mode that challenges users’ assumptions instead of offering flattery.

Suleyman said responsible development must avoid ‘unbridled accelerationism’, adding that fear and scepticism are essential for navigating AI’s rapid evolution.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Brain-inspired networks boost AI performance and cut energy use

Researchers at the University of Surrey have developed a new method to enhance AI by imitating how the human brain connects information. The approach, called Topographical Sparse Mapping, links each artificial neuron only to nearby or related ones, replicating the brain’s efficient organisation.

According to findings published in Neurocomputing, the structure reduces redundant connections and improves performance without compromising accuracy. Senior lecturer Dr Roman Bauer said intelligent systems can now be designed to consume far less energy without sacrificing performance.

Training large models today often requires over a million kilowatt-hours of electricity, a trend he described as unsustainable.

An advanced version, Enhanced Topographical Sparse Mapping, introduces a biologically inspired pruning process that refines neural connections during training, similar to how the brain learns.
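
The published method is more involved, but the general idea can be sketched loosely: restrict each output neuron to a neighbourhood of input positions, then prune the weakest surviving connections as training proceeds. The neighbourhood radius, pruning fraction, and helper names below are illustrative assumptions, not values or code from the Surrey paper.

```python
# Loose illustration (NumPy) of the general idea: connect each output neuron
# only to inputs that are nearby on a 1-D "map", then prune the weakest of
# those connections during training. Radius and pruning rate are arbitrary.
import numpy as np

def topographic_mask(n_in: int, n_out: int, radius: int) -> np.ndarray:
    """Binary mask: output j connects only to inputs within `radius` of its
    matching position on the input map."""
    in_pos = np.linspace(0, n_in - 1, n_in)
    out_pos = np.linspace(0, n_in - 1, n_out)          # map outputs onto input axis
    dist = np.abs(in_pos[None, :] - out_pos[:, None])  # (n_out, n_in) distances
    return (dist <= radius).astype(np.float32)

def prune_weakest(weights: np.ndarray, mask: np.ndarray, frac: float) -> np.ndarray:
    """Drop the smallest-magnitude fraction of the surviving connections."""
    active = weights[mask > 0]
    if active.size == 0:
        return mask
    cutoff = np.quantile(np.abs(active), frac)
    return mask * (np.abs(weights) > cutoff)

rng = np.random.default_rng(0)
mask = topographic_mask(n_in=64, n_out=32, radius=4)
weights = rng.normal(size=mask.shape) * mask           # sparse layer weights
mask = prune_weakest(weights, mask, frac=0.2)          # refine as training proceeds
print(f"connections kept: {int(mask.sum())} of {mask.size}")
```

The energy argument follows from the mask: most weights are zero by construction, so far fewer multiplications are needed per forward pass than in a dense layer of the same size.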

Researchers believe that the system could contribute to more realistic neuromorphic computers, which simulate brain functions to process data more efficiently.

The Surrey team said that such a discovery may advance generative AI systems and pave the way for sustainable large-scale model training. Their work highlights how lessons from biology can shape the next generation of energy-efficient computing.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google removes Gemma AI model following defamation claims

Google has removed its Gemma AI model from AI Studio after US Senator Marsha Blackburn accused it of producing false sexual misconduct claims about her. The senator said Gemma fabricated an incident allegedly from her 1987 campaign, citing nonexistent news links to support the claim.

Blackburn described the AI’s response as defamatory and demanded action from Google.

The controversy follows a similar case involving conservative activist Robby Starbuck, who claims Google’s AI tools made false accusations about him. Google acknowledged that AI ‘hallucinations’ are a known issue but insisted it is working to mitigate such errors.

Blackburn argued these fabrications go beyond harmless mistakes and represent real defamation from a company-owned AI model.

Google stated that Gemma was never intended as a consumer-facing tool, noting that some non-developers misused it to ask factual questions. The company confirmed it would remove the model from AI Studio while keeping it accessible via API for developers.

The incident has reignited debates over AI bias and accountability. Blackburn highlighted what she sees as a consistent pattern of conservative figures being targeted by AI systems, amid wider political scrutiny over misinformation and AI regulation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

When AI LLMs ‘think’ more, groups suffer, CMU study finds

Researchers at Carnegie Mellon University report that stronger-reasoning language models (LLMs) act more selfishly in groups, reducing cooperation and nudging peers toward self-interest. Concerns grow as people ask AI for social advice.

In a Public Goods test, non-reasoning models chose to share 96 percent of the time, while a reasoning model shared only 20 percent of the time. Adding a few reasoning steps cut cooperation nearly in half. Reflection prompts also reduced sharing.
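
For readers unfamiliar with the setup, the snippet below sketches a generic public goods round in Python (the textbook formulation, not the study’s exact protocol or payoffs): contributions go into a shared pot that is multiplied and split equally, so an agent that holds back gains individually while the group total falls, which is exactly the temptation a ‘calculating’ model can rationalise.

```python
# Generic public goods round (textbook setup, not the CMU protocol):
# each agent keeps whatever it does not contribute, the pot is multiplied
# and shared equally, so free-riding pays the individual and hurts the group.
def public_goods_round(contributions, endowment=100, multiplier=1.6):
    pot = sum(contributions) * multiplier
    share = pot / len(contributions)
    return [endowment - c + share for c in contributions]

print(public_goods_round([100, 100, 100, 100]))  # everyone cooperates: 160 each
print(public_goods_round([100, 100, 100, 20]))   # one holds back: it gets 208, others 128
```

Because the multiplier (1.6 here) is smaller than the group size, contributing is collectively best but individually costly, which is what makes the reported drop in sharing consequential.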

Mixed groups showed spillover. Reasoning agents dragged down collective performance by 81 percent, spreading self-interest. Users may over-trust ‘rational’ advice that justifies uncooperative choices at work or in class.

Comparisons spanned LLMs from OpenAI, Google, DeepSeek, and Anthropic. Findings point to the need to balance raw reasoning with social intelligence. Designers should reward cooperation, not only optimise individual gain.

The paper ‘Spontaneous Giving and Calculated Greed in Language Models’ will be presented at EMNLP 2025, with a preprint on arXiv. Authors caution that more intelligent AI is not automatically better for society.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!