Amazon brings Alexa+ to its Music app for conversational music discovery

Amazon has launched Alexa+ within the Amazon Music app, introducing a new era of AI-powered music discovery. The updated experience allows users to engage in natural conversations about songs, artists and genres, making music searches feel more like chatting with a knowledgeable friend.

Early Access users on iOS and Android can now explore the feature, which has already tripled user engagement compared with the original Alexa. Listeners can uncover artist influences, trace song origins, and generate playlists through dynamic, dialogue-based AI interactions.

Alexa+ creates contextually rich recommendations based on moods, activities, or cultural styles, enabling highly personalised playlists that evolve in real-time. Users can request specific vibes, such as upbeat 2010s hits or relaxed Sunday tunes, all crafted through natural language.

Amazon said Alexa+ is redefining how people connect with music by merging conversational AI with deep cultural knowledge. A full rollout is expected following the Early Access phase, with broader availability to Prime and non-Prime users.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Cloudflare chief warns AI is redefining the internet’s business model

AI is inserting itself between companies and customers, Cloudflare CEO Matthew Prince warned in Toronto. More people ask chatbots before visiting sites, dulling brands’ impact. Even research teams lose revenue as investors lean on AI summaries.

Frontier models devour data, pushing firms to chase exclusive sources. Cloudflare lets publishers block unpaid crawlers to reclaim control and compensation. The bigger question, said Prince, is which business model will rule an AI-mediated internet.

Policy scrutiny focuses on platforms that blend search with AI collection. Prince urged governments to separate Google’s search access from AI crawling to level the field. Countries that enforce a split could attract publishers and researchers seeking predictable rules and payment.

Licensing deals with news outlets, Reddit, and others coexist with scraping disputes and copyright suits. Google says it follows robots.txt, yet testimony indicated AI Overviews can use content blocked by robots.txt for training. Vague norms risk eroding incentives to create high-quality online content.

A practical near-term playbook combines technical and regulatory steps. Publishers should meter or block AI crawlers that do not pay. Policymakers should require transparency, consent, and compensation for high-value datasets, guiding the shift to an AI-mediated web that still rewards creators.
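The metering step above can begin with a plain robots.txt policy. This is only a sketch: the user-agent tokens shown are examples of publicly documented AI crawlers, and since robots.txt is merely a request that well-behaved bots honour, non-paying crawlers that ignore it still require network-level blocking of the kind Cloudflare provides.

```
# robots.txt — illustrative policy: disallow known AI-training crawlers,
# keep search indexing open. Verify current user-agent tokens against each
# crawler's own published documentation before deploying.
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

# Search crawlers remain allowed
User-agent: Googlebot
Allow: /
```

Pairing a policy like this with server-side user-agent and IP verification is what turns a polite request into enforceable access control.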


Microsoft deal signals pay-per-use path for AI access to People Inc. content

People Inc. has joined Microsoft’s publisher content marketplace in a pay-per-use deal that compensates media for AI access. Copilot will be the first buyer, while People Inc. continues to block most AI crawlers via Cloudflare to force paid licensing.

People Inc., formerly Dotdash Meredith, said Microsoft’s marketplace lets AI firms pay ‘à la carte’ for specific content. The agreement differs from its earlier OpenAI pact, which the company described as more ‘all-you-can-eat’, but the priority remains ‘respected and paid for’ use.

Executives disclosed a sharp fall in Google search referrals, from 54% of traffic two years ago to 24% last quarter, a decline they attribute to AI Overviews. Leadership argues that crawler identification and paid access should become the norm as AI sits between publishers and audiences.

Blocking non-paying bots has ‘brought almost everyone to the table’, People Inc. said, signalling more licences to come. Microsoft’s marketplace is framed as a model for compensating rights-holders while enabling AI tools to use high-quality, authorised material.

IAC reported People Inc. digital revenue up 9% to $269m, with performance marketing and licensing up 38% and 24% respectively. The publisher also acquired Feedfeed, expanding its food vertical reach while pursuing additional AI content partnerships.


Member States cooperate on next-generation European digital platforms

The European Commission has approved the creation of the Digital Commons European Digital Infrastructure Consortium (DC-EDIC), designed to strengthen Europe’s digital sovereignty. The new body unites France, Germany, the Netherlands and Italy as founding members.

DC-EDIC aims to build open, interoperable and sovereign digital systems, reducing reliance on imported technologies. Its work will focus on shared data infrastructure, connected public administration and collaborative digital tools to support both governments and businesses.

The Paris-based consortium will coordinate funding access, offer legal and technical guidance, and support the scaling of open-source digital solutions across Europe. Future projects will include a one-stop shop for resources, an expertise hub and a Digital Commons Forum.

All jointly developed software will be released under free, open-source licences, ensuring transparency and reuse whilst being GDPR compliant. The official launch is expected in December 2025, with the first annual State of the Digital Commons report planned for 2027.


Salesforce’s Agentforce helps organisations deliver 24/7 support

Organisations across public and private sectors are using Salesforce’s Agentforce to engage people whenever and wherever they need support.

From local governments to hospitals and education platforms, AI systems are transforming how services are delivered and accessed.

In the city of Kyle, Texas, an Agentforce-driven 311 app enables residents to report issues such as potholes or water leaks. The city plans to make the system voice-enabled, reducing traditional call volumes while maintaining a steady flow of service requests and faster responses.

At Pearson, AI enables students to access their online learning platforms instantly, regardless of their time zone. The company stated that the technology fosters loyalty by providing immediate assistance, rather than requiring users to wait for human support.

Meanwhile, UChicago Medicine utilises AI to streamline patient interactions, from prescription refills to scheduling, while ambient listening tools enable doctors to focus entirely on patients rather than typing notes.

Salesforce said Agentforce empowers organisations to save resources while enhancing trust, accessibility, and service quality. By meeting people on their own terms, AI enables more responsive and human-centred interactions across various industries.


Facebook update lets admins make private groups public safely

Meta has introduced a new Facebook update allowing group administrators to change their private groups to public while keeping members’ privacy protected. The company said the feature gives admins more flexibility to grow their communities without exposing existing private content.

All posts, comments, and reactions shared before the change will remain visible only to previous members, admins, and moderators. The member list will also stay private. Once converted, any new posts will be visible to everyone, including non-Facebook users, which helps discussions reach a broader audience.

Admins have three days to review and cancel the conversion before it becomes permanent. Members will be notified when a group changes its status, and a globe icon will appear when posting in public groups as a reminder of visibility settings.

Groups can be switched back to private at any time, restoring member-only access.

Meta said the feature supports community growth and deeper engagement while maintaining privacy safeguards. Group admins can also utilise anonymous or nickname-based participation options, providing users with greater control over their engagement in public discussions.


Growing scrutiny over AI errors in professional use

Judges and employers are confronting a surge in AI-generated mistakes, from fabricated legal citations to inaccurate workplace data. Courts in the United States have already recorded hundreds of flawed filings, raising concerns about unchecked reliance on generative systems.

Experts urge professionals to treat AI as an assistant rather than an authority. Tools can support research and report writing, yet unchecked outputs often contain subtle inaccuracies that could mislead users or damage reputations.

Data scientist Damien Charlotin has identified nearly 500 court documents containing false AI-generated information within months. Even established firms have faced judicial penalties after submitting briefs with non-existent case references, underlining growing professional risks.

Workplace advisers recommend verifying AI results, protecting confidential information, and obtaining consent when using digital notetakers. Training and prompt literacy are becoming essential skills as AI tools continue shaping daily operations across industries.


Mustafa Suleyman warns against building seemingly conscious AI

Mustafa Suleyman, CEO of Microsoft AI, argues that AI should be built for people, not to replace them. Growing belief in chatbot consciousness risks campaigns for AI rights and a needless struggle over personhood that distracts from human welfare.

Debates over true consciousness miss the urgent issue of convincing imitation. Seemingly conscious AI may speak fluently, recall interactions, claim experiences, and set goals that appear to exhibit agency. Capabilities are close, and the social effects will be real regardless of metaphysics.

People already form attachments to chatbots and seek meaning in conversations. Reports of dependency and talk of ‘AI psychosis’ show persuasive systems can nudge vulnerable users. Extending moral status on the basis of such uncertainty, Suleyman argues, would amplify delusions and dilute existing rights.

Norms and design principles are needed across the industry. Products should include engineered interruptions that break the illusion, clear statements of nonhuman status, and guardrails for responsible ‘personalities’. Microsoft AI is exploring approaches that promote offline connection and healthy use.

A positive vision keeps AI empowering without faking inner life. Companions should organise tasks, aid learning, and support collaboration while remaining transparently artificial. The focus remains on safeguarding humans, animals, and the natural world, not on granting rights to persuasive simulations.


Comet browser update puts privacy in users’ hands

Perplexity has unveiled new privacy features for its AI-powered browser, Comet, designed to give users clearer control over their data. The updates include a new homepage widget called Privacy Snapshot, which allows people to review and adjust privacy settings in one place.

The widget provides a real-time view of how Comet protects users online and simplifies settings for ad blocking, tracker management and data access. Users can toggle permissions for the Comet Assistant directly from the homepage.

Comet’s updated AI Assistant settings now show precisely how data is used, including where it is stored locally or shared for processing. Sensitive information such as passwords and payment details remain securely stored on the user’s device.

Perplexity said the changes reinforce its ‘privacy by default’ approach, an important principle in EU data protection law, combining ad blocking, safe browsing and transparent data handling. The new features are available in the latest Comet update across desktop and mobile platforms.


Live exploitation of CVE-2024-1086 across older Linux versions flagged by CISA

CISA’s warning serves as a reminder that ransomware is not confined to Windows. A Linux kernel flaw, CVE-2024-1086, is being exploited in real-world incidents, and federal networks face a November 20 patch-or-disable deadline. Businesses should read it as their cue, too.

Attackers who reach a vulnerable host can escalate privileges to root, bypass defences, and deploy malware. Many older kernels remain in circulation even though upstream fixes were shipped in January 2024, creating a soft target when paired with phishing and lateral movement.

Practical steps matter more than labels. Patch affected kernels where possible, isolate any components that cannot be updated, and verify the running versions against vendor advisories and the NIST catalogue. Treat emergency changes as production work, with change logs and checks.
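The verification step above can be scripted across a fleet. A minimal Python sketch follows; the patched-version threshold used here is a placeholder assumption for illustration only, since the authoritative fixed version for CVE-2024-1086 depends on your distribution's backports and should be taken from the vendor advisory and the NIST catalogue.

```python
def parse_version(v: str) -> tuple:
    """Turn a kernel string like '6.1.55-generic' into a comparable tuple (6, 1, 55)."""
    core = v.split("-")[0]
    return tuple(int(x) for x in core.split("."))

# Hypothetical threshold — replace with the fixed version from your vendor advisory.
PATCHED = parse_version("6.1.76")

def needs_patch(running: str) -> bool:
    """Flag kernels older than the (assumed) first patched release."""
    return parse_version(running) < PATCHED

# Example: an older kernel is flagged for remediation
print(needs_patch("6.1.55-generic"))  # True
```

Feeding this the output of `uname -r` on each host gives a quick first-pass inventory, though distribution kernels with backported fixes can report old version numbers, so the result should be cross-checked against the vendor's advisory rather than trusted on its own.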

Resilience buys time when updates lag. Enforce least privilege, require MFA for admin entry points, and segment crown-jewel services. Tune EDR to spot privilege-escalation behaviour and suspicious modules, then rehearse restores from offline, immutable backups.

Security habits shape outcomes as much as CVEs. Teams that patch quickly, validate fixes, and document closure shrink the blast radius. Teams that defer kernel maintenance invite repeat visits, turning a known bug into an avoidable outage.
