Perplexity’s Comet hits Amazon’s policy wall

Amazon has blocked Perplexity's Comet after warning the company that its agent was shopping on Amazon's store without identifying itself. Perplexity counters that an agent simply inherits its user's permissions. The dispute turns a header-level detail into a question of who gets to mediate online buying.

Amazon likens agents to delivery or travel intermediaries that announce themselves, and hints at blocking non-compliant bots. Because Amazon sells its own assistant, Rufus, critics fear the rules double as a competitive moat; Perplexity calls it gatekeeping.

Beneath this is a business-model clash. Retailers monetise discovery with ads and sponsored placement. Neutral agents promise price-first buying and fewer impulse ads. If bots dominate, incumbents lose margin and control of merchandising levers.

Interoperability likely requires standards, including explicit bot IDs, rate limits, purchase scopes, consented data access, and auditable logs. Stores could ship agent APIs for inventory, pricing, and returns, with 2FA and fraud checks for transactions.
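The disclosure at the heart of the dispute is mechanically simple. A minimal sketch of an agent identifying itself in an HTTP request follows; the agent name, operator tag, and `X-Agent-Scope` header are illustrative assumptions, not an existing standard:

```python
# Hypothetical sketch: a shopping agent disclosing itself and its purchase
# scope in request headers. The token format and X-Agent-Scope header are
# assumptions for illustration, not a published standard.
import urllib.request


def build_agent_request(url: str, operator: str, purchase_scope: str):
    """Build a request that discloses the agent and the user it acts for."""
    req = urllib.request.Request(url)
    # Explicit bot ID: name the agent software and its human operator.
    req.add_header("User-Agent", f"ExampleShopAgent/1.0 (operator={operator})")
    # Declared purchase scope, so the store can enforce limits and audit later.
    req.add_header("X-Agent-Scope", purchase_scope)
    return req


req = build_agent_request("https://shop.example/cart", "alice",
                          "single-purchase;max=50USD")
```

Real interoperability would also require the store side to verify the identity, enforce the declared scope, and keep auditable logs of the transaction.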

In the near term, expect fragmentation as platforms favour native agents and restrictive terms, while regulators weigh transparency and competition. A workable truce: disclose the agent, honour robots and store policies, and use clear opt-in data contracts.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Identifying AI-generated videos on social media

AI-generated videos are flooding social media, and identifying them is becoming increasingly difficult. Low resolution or grainy footage can hint at artificial creation, though even polished clips may be deceptive.

Subtle flaws often reveal AI manipulation, including unnatural skin textures, unrealistic background movements, or odd patterns in hair and clothing. Shorter, highly compressed clips can conceal these artefacts, making detection even more challenging.

Digital literacy experts warn that traditional visual cues will soon be unreliable. Viewers should prioritise the source and context of online videos, approach content critically, and verify information through trustworthy channels.

Google Maps launches AI-powered live lane guidance for safer driving

Google has introduced AI-powered live lane guidance for cars with Google built in, marking a significant step toward intelligent in-vehicle navigation.

The new feature enables Google Maps to interpret roads and lanes like a driver, offering real-time audio and visual cues that help motorists make timely lane changes and avoid missed exits.

Using AI that analyses lane markings and road signs through the vehicle’s front-facing camera, Google Maps integrates the live data with its navigation system, used by over two billion people monthly. The result is more accurate guidance alongside existing traffic, ETA, and hazard updates.

The feature will debut in Polestar 4 vehicles in the US and Sweden, with plans to expand across more models and road types in collaboration with major automakers.

Cloudflare chief warns AI is redefining the internet’s business model

AI is inserting itself between companies and customers, Cloudflare CEO Matthew Prince warned in Toronto. More people ask chatbots before visiting sites, dulling brands’ impact. Even research teams lose revenue as investors lean on AI summaries.

Frontier models devour data, pushing firms to chase exclusive sources. Cloudflare lets publishers block unpaid crawlers to reclaim control and compensation. The bigger question, said Prince, is which business model will rule an AI-mediated internet.

Policy scrutiny focuses on platforms that blend search with AI collection. Prince urged governments to separate Google’s search access from AI crawling to level the field. Countries that enforce a split could attract publishers and researchers seeking predictable rules and payment.

Licensing deals with news outlets, Reddit, and others coexist with scraping disputes and copyright suits. Google says it follows robots.txt, yet testimony indicated AI Overviews can use content blocked by robots.txt for training. Vague norms risk eroding incentives to create high-quality online content.

A practical near-term playbook combines technical and regulatory steps. Publishers should meter or block AI crawlers that do not pay. Policymakers should require transparency, consent, and compensation for high-value datasets, guiding the shift to an AI-mediated web that still rewards creators.
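The metering idea above can be sketched with Python's standard robots.txt parser; the crawler names and rules here are hypothetical:

```python
# Minimal sketch of crawler metering: a publisher's robots.txt that admits a
# paying crawler and blocks an unpaid one. Both bot names are hypothetical.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: PaidLicensedBot
Allow: /

User-agent: UnpaidAIBot
Disallow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

print(parser.can_fetch("PaidLicensedBot", "https://news.example/article"))  # True
print(parser.can_fetch("UnpaidAIBot", "https://news.example/article"))      # False
```

robots.txt is advisory only, which is why Cloudflare-style enforcement blocks non-compliant crawlers at the network edge rather than trusting them to honour the file.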

Microsoft deal signals pay-per-use path for AI access to People Inc. content

People Inc. has joined Microsoft’s publisher content marketplace in a pay-per-use deal that compensates media for AI access. Copilot will be the first buyer, while People Inc. continues to block most AI crawlers via Cloudflare to force paid licensing.

People Inc., formerly Dotdash Meredith, said Microsoft’s marketplace lets AI firms pay ‘à la carte’ for specific content. The agreement differs from its earlier OpenAI pact, which the company described as more ‘all-you-can-eat’, but the priority remains ‘respected and paid for’ use.

Executives disclosed a sharp fall in Google search referrals: from 54% of traffic two years ago to 24% last quarter, citing AI Overviews. Leadership argues that crawler identification and paid access should become the norm as AI sits between publishers and audiences.

Blocking non-paying bots has ‘brought almost everyone to the table’, People Inc. said, signalling more licences to come. Microsoft’s marketplace is framed as a model for compensating rights-holders while letting AI tools use high-quality, authorised material.

IAC reported People Inc. digital revenue up 9% to $269m, with performance marketing and licensing up 38% and 24% respectively. The publisher also acquired Feedfeed, expanding its food vertical reach while pursuing additional AI content partnerships.

Live exploitation of CVE-2024-1086 across older Linux versions flagged by CISA

CISA’s warning serves as a reminder that ransomware is not confined to Windows. A Linux kernel flaw, CVE-2024-1086, is being exploited in real-world incidents, and federal networks face a November 20 patch-or-disable deadline. Businesses should read it as their cue, too.

Attackers who reach a vulnerable host can escalate privileges to root, bypass defences, and deploy malware. Many older kernels remain in circulation even though upstream fixes were shipped in January 2024, creating a soft target when paired with phishing and lateral movement.

Practical steps matter more than labels. Patch affected kernels where possible, isolate any components that cannot be updated, and verify the running versions against vendor advisories and the NIST catalogue. Treat emergency changes as production work, with change logs and checks.
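The version check above can be sketched as follows; the threshold shown is a placeholder, and the authoritative fixed versions come from your distribution's advisory and the NIST entry for CVE-2024-1086:

```python
# Sketch: compare a running kernel release against a minimum patched version
# taken from a vendor advisory. "6.6.15" below is a placeholder threshold --
# confirm the real fixed versions for your stable series against the vendor
# advisory and the NIST catalogue entry for CVE-2024-1086.
import re


def parse_kernel(release: str) -> tuple:
    """Extract the leading x.y.z triple from a kernel release string."""
    m = re.match(r"(\d+)\.(\d+)\.(\d+)", release)
    if not m:
        raise ValueError(f"unrecognised kernel release: {release!r}")
    return tuple(int(part) for part in m.groups())


def is_patched(running: str, fixed: str) -> bool:
    """True if `running` is at or above `fixed`.

    Only meaningful within one stable series: each series (5.15.x, 6.1.x,
    6.6.x, ...) received its own backported fix with its own version number.
    """
    return parse_kernel(running) >= parse_kernel(fixed)


print(is_patched("6.6.8-generic", "6.6.15"))  # False: below the threshold
print(is_patched("6.6.15-arch1", "6.6.15"))   # True: at the threshold
```

On a live host, the `running` value would come from `platform.release()` or `uname -r`, checked against each advisory-listed version for the series in use.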

Resilience buys time when updates lag. Enforce least privilege, require MFA for admin entry points, and segment crown-jewel services. Tune EDR to spot privilege-escalation behaviour and suspicious modules, then rehearse restores from offline, immutable backups.

Security habits shape outcomes as much as CVEs. Teams that patch quickly, validate fixes, and document closure shrink the blast radius. Teams that defer kernel maintenance invite repeat visits, turning a known bug into an avoidable outage.

Q3 funding in Europe rebounds with growth rounds leading

Europe raised €13.7bn across just over 1,300 rounds in Q3, the strongest quarter since Q2 2024. September alone brought €8.7bn. July and August reflected the familiar summer slowdown.

Growth equity provided €7bn, or 51.6% of the total, marking two consecutive quarters with more than 150 growth rounds. Data centres, AI agents, and GenAI led the activity, as more AI startups scaled with larger cheques.

Early-stage totals were the lowest in 12 months, yet still ahead of Q3 last year. Lovable’s $200 million Series A at a $1.8 billion valuation stood out. The quarter’s seven new unicorns included Nscale, Fuse Energy, Framer, IQM, Nothing, and Tide.

ASML led the quarter’s largest deal, investing €1.3bn in Mistral AI’s €1.7bn Series C. France tallied €2.7bn, heavily concentrated in Mistral, while the UK reached €4.49bn. Germany followed with just over €1.5bn, ahead of the Netherlands and Switzerland.

AI-native funding surpassed all other verticals for the first time on record, reaching €3.9bn, with deeptech at €2.6bn. Agentic AI logged 129 rounds, sharply higher year-over-year, while data centres edged out agents for capital. Defence and dual-use technology attracted €2.1bn across 44 rounds.

Grammarly becomes Superhuman with unified AI tools for work

Superhuman, formerly known as Grammarly, is bundling its writing tools, workspace platform, and email client with a new AI assistant suite. The company says the rebrand reflects a push to unify generative AI features that streamline workplace tasks and online communication for subscribers.

Grammarly acquired Coda and Superhuman Mail earlier this year and added Superhuman Go. The bundle arrives as a single plan. Go’s agents brainstorm, gather information, send emails, and schedule meetings to reduce app switching.

Superhuman Mail organises inboxes and drafts replies in your voice. Coda pulls data from other apps into documents, tables, and dashboards. An upcoming update lets Coda act on that data to automate plans and tasks.

CEO Shishir Mehrotra says the aim is ambient, integrated AI. Built on Grammarly’s infrastructure, the tools work in place without prompting or pasting. The bundle targets teams seeking consistent AI across writing, email, and knowledge work.

Analysts will watch brand overlap with the existing Superhuman email app and enterprise pricing. Success depends on trust, data controls, and measurable time savings versus point tools. Rollout specifics, including regions, will follow.

Spot the red flags of AI-enabled scams, says California DFPI

The California Department of Financial Protection & Innovation (DFPI) has warned that criminals are weaponising AI to scam consumers. Deepfakes, cloned voices, and slick messages mimic trusted people and exploit urgency. Learning the new warning signs cuts risk quickly.

Imposter deepfakes and romance ruses often begin with perfect profiles or familiar voices pushing you to pay or invest. Grandparent scams use cloned audio in fake emergencies; agree a family passphrase and verify on a separate channel. Influencers may flaunt fabricated credentials and followers.

Automated attacks now use AI to sidestep basic defences and steal passwords or card details. Reduce exposure with two-factor authentication, regular updates, and a reputable password manager. Pause before clicking unexpected links or attachments, even from known names.

Investment frauds increasingly tout vague ‘AI-powered’ returns while simulating growth and testimonials, then blocking withdrawals. Beware guarantees of no risk, artificial deadlines, unsolicited messages, and recruit-to-earn offers. Research independently and verify registrations before sending money.

DFPI advises careful verification before acting. Confirm identities through trusted channels, refuse to move money under pressure, and secure devices. Report suspicious activity promptly; smart habits remain the best defence.

Rare but real: mental health risks at ChatGPT scale

OpenAI says a small share of ChatGPT users show possible signs of mental health emergencies each week, including mania, psychosis, or suicidal thoughts. The company puts the figure at 0.07 percent of weekly users and says safety prompts are triggered when such signs appear. Critics counter that even small percentages scale at ChatGPT’s size.

A further 0.15 percent of weekly users discuss explicit indicators of potential suicidal planning or intent. Updates aim to respond more safely and empathetically, and to flag indirect self-harm signals. Sensitive chats can be routed to safer models in a new window.

More than 170 clinicians across 60 countries advise OpenAI on risk cues and responses. Guidance focuses on encouraging users to seek real-world support. Researchers warn vulnerable people may struggle to act on on-screen warnings.

External specialists see both value and limits. AI may widen access when services are stretched, yet automated advice can mislead. Risks include reinforcing delusions and misplaced trust in authoritative-sounding output.

Legal and public scrutiny is rising after high-profile cases linked to chatbot interactions. Families and campaigners want more transparent accountability and stronger guardrails. Regulators continue to debate transparency, escalation pathways, and duty of care.
