Google Maps launches AI-powered live lane guidance for safer driving

Google has introduced AI-powered live lane guidance for cars with Google built in, marking a significant step toward intelligent in-vehicle navigation.

The new feature enables Google Maps to interpret roads and lanes much as a driver does, offering real-time audio and visual cues that help motorists make timely lane changes and avoid missed exits.

The AI analyses lane markings and road signs through the vehicle’s front-facing camera, and Google Maps integrates that live data with its navigation system, which is used by over two billion people monthly. The result is more accurate guidance alongside existing traffic, ETA, and hazard updates.

The feature will debut in Polestar 4 vehicles in the US and Sweden, with plans to expand across more models and road types in collaboration with major automakers.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Cloudflare chief warns AI is redefining the internet’s business model

AI is inserting itself between companies and customers, Cloudflare CEO Matthew Prince warned in Toronto. More people ask chatbots before visiting sites, dulling brands’ impact. Even research teams lose revenue as investors lean on AI summaries.

Frontier models devour data, pushing firms to chase exclusive sources. Cloudflare lets publishers block unpaid crawlers to reclaim control and compensation. The bigger question, said Prince, is which business model will rule an AI-mediated internet.

Policy scrutiny focuses on platforms that blend search with AI collection. Prince urged governments to separate Google’s search access from AI crawling to level the field. Countries that enforce a split could attract publishers and researchers seeking predictable rules and payment.

Licensing deals with news outlets, Reddit, and others coexist with scraping disputes and copyright suits. Google says it follows robots.txt, yet testimony indicated AI Overviews can use content blocked by robots.txt for training. Vague norms risk eroding incentives to create high-quality online content.

A practical near-term playbook combines technical and regulatory steps. Publishers should meter or block AI crawlers that do not pay. Policymakers should require transparency, consent, and compensation for high-value datasets, guiding the shift to an AI-mediated web that still rewards creators.
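In practice, metering usually starts with robots.txt. Below is a minimal sketch, assuming a publisher that wants to refuse the publicly documented AI-training crawlers while keeping ordinary search indexing open; user-agent tokens change over time, so they should be checked against each operator’s documentation, and because robots.txt is purely advisory, it needs to be backed by edge-level enforcement of the kind Cloudflare offers.

```
# Minimal robots.txt sketch: block documented AI-training crawlers,
# keep ordinary search indexing open. Tokens are illustrative and
# should be verified against each operator's docs; compliance is
# voluntary, so pair this file with edge rules that enforce it.
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: *
Allow: /
```

Note that Google-Extended governs AI training use while Googlebot handles search crawling, which is exactly the bundling Prince wants regulators to unpick: blocking Googlebot outright would also remove a site from search results.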

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft deal signals pay-per-use path for AI access to People Inc. content

People Inc. has joined Microsoft’s publisher content marketplace in a pay-per-use deal that compensates media for AI access. Copilot will be the first buyer, while People Inc. continues to block most AI crawlers via Cloudflare to force paid licensing.

People Inc., formerly Dotdash Meredith, said Microsoft’s marketplace lets AI firms pay ‘à la carte’ for specific content. The agreement differs from its earlier OpenAI pact, which the company described as more ‘all-you-can-eat’, but the priority remains ‘respected and paid for’ use.

Executives disclosed a sharp fall in Google search referrals, from 54% of traffic two years ago to 24% last quarter, attributing the decline to AI Overviews. Leadership argues that crawler identification and paid access should become the norm as AI sits between publishers and audiences.

Blocking non-paying bots has ‘brought almost everyone to the table’, People Inc. said, signalling more licences to come. The company frames Microsoft’s marketplace as a model for compensating rights-holders while enabling AI tools to use high-quality, authorised material.

IAC reported People Inc. digital revenue up 9% to $269m, with performance marketing and licensing up 38% and 24% respectively. The publisher also acquired Feedfeed, expanding its food vertical reach while pursuing additional AI content partnerships.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Live exploitation of CVE-2024-1086 across older Linux versions flagged by CISA

CISA’s warning serves as a reminder that ransomware is not confined to Windows. A Linux kernel flaw, CVE-2024-1086, is being exploited in real-world incidents, and federal networks face a November 20 patch-or-disable deadline. Businesses should read it as their cue, too.

Attackers who reach a vulnerable host can escalate privileges to root, bypass defences, and deploy malware. Many older kernels remain in circulation even though upstream fixes were shipped in January 2024, creating a soft target when paired with phishing and lateral movement.

Practical steps matter more than labels. Patch affected kernels where possible, isolate any components that cannot be updated, and verify the running versions against vendor advisories and the NIST catalogue. Treat emergency changes as production work, with change logs and checks.
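For the verification step, a short script can flag hosts whose kernel release string sits below a known-fixed version. The sketch below is a heuristic only: the per-series minimums are illustrative values that must be confirmed against vendor advisories and the NVD entry for CVE-2024-1086, and distro kernels often backport fixes without bumping the upstream version number.

```python
# Heuristic check: compare the running kernel against assumed
# per-series minimum patched versions for CVE-2024-1086.
# The minimums below are illustrative placeholders -- confirm them
# against your distro's advisory and the NVD record before acting.
import platform
import re

PATCHED_MINIMUMS = {
    (5, 15): (5, 15, 149),
    (6, 1): (6, 1, 76),
    (6, 6): (6, 6, 15),
    (6, 7): (6, 7, 3),
}

def kernel_version() -> tuple:
    """Extract (major, minor, patch) from the kernel release string."""
    m = re.match(r"(\d+)\.(\d+)(?:\.(\d+))?", platform.release())
    if not m:
        raise ValueError(f"unparseable release: {platform.release()}")
    return tuple(int(g or 0) for g in m.groups())

ver = kernel_version()
minimum = PATCHED_MINIMUMS.get(ver[:2])
if minimum is None:
    print(f"{platform.release()}: series not in table; check advisories")
elif ver >= minimum:
    print(f"{platform.release()}: at or above assumed fixed version")
else:
    print(f"{platform.release()}: below {minimum}; prioritise patching")
```

Because backported fixes do not change the version string, treat a ‘below minimum’ result as a prompt to check the vendor advisory, not as proof of exposure.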

Resilience buys time when updates lag. Enforce least privilege, require MFA for admin entry points, and segment crown-jewel services. Tune EDR to spot privilege-escalation behaviour and suspicious modules, then rehearse restores from offline, immutable backups.

Security habits shape outcomes as much as CVEs. Teams that patch quickly, validate fixes, and document closure shrink the blast radius. Teams that defer kernel maintenance invite repeat visits, turning a known bug into an avoidable outage.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Q3 funding in Europe rebounds with growth rounds leading

Europe raised €13.7bn across just over 1,300 rounds in Q3, the strongest quarter since Q2 2024. September alone brought €8.7bn. July and August reflected the familiar summer slowdown.

Growth equity provided €7bn, or 51.6% of the total, with two consecutive quarters now surpassing 150 growth rounds. Data centres, AI agents, and GenAI led the activity, as more AI startups scaled with larger cheques.

Early-stage totals were the lowest in 12 months, yet they were ahead of Q3 last year. Lovable’s $200 million Series A at a $1.8 billion valuation stood out. Seven new unicorns included Nscale, Fuse Energy, Framer, IQM, Nothing, and Tide.

ASML led the quarter’s largest deal, investing €1.3bn in Mistral AI’s €1.7bn Series C. France tallied €2.7bn, heavily concentrated in Mistral, while the UK reached €4.49bn. Germany followed with just over €1.5bn, ahead of the Netherlands and Switzerland.

AI-native funding surpassed all verticals for the first time on record, reaching €3.9bn, with deeptech at €2.6bn. Agentic AI logged 129 rounds, sharply higher year-on-year, while data centres edged out agents for capital. Defence and dual-use technology attracted €2.1bn across 44 rounds.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Grammarly becomes Superhuman with unified AI tools for work

Superhuman, formerly known as Grammarly, is bundling its writing tools, workspace platform, and email client with a new AI assistant suite. The company says the rebrand reflects a push to unify generative AI features that streamline workplace tasks and online communication for subscribers.

Grammarly acquired Coda and Superhuman Mail earlier this year and added Superhuman Go. The bundle arrives as a single plan. Go’s agents brainstorm, gather information, send emails, and schedule meetings to reduce app switching.

Superhuman Mail organises inboxes and drafts replies in the user’s voice. Coda pulls data from other apps into documents, tables, and dashboards. An upcoming update will let Coda act on that data to automate plans and tasks.

CEO Shishir Mehrotra says the aim is ambient, integrated AI. Built on Grammarly’s infrastructure, the tools work in place without prompting or pasting. The bundle targets teams seeking consistent AI across writing, email, and knowledge work.

Analysts will watch for brand overlap with the existing Superhuman email app, as well as enterprise pricing. Success depends on trust, data controls, and measurable time savings versus point tools. Rollout specifics, including regions, will follow.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Spot the red flags of AI-enabled scams, says California DFPI

The California Department of Financial Protection & Innovation (DFPI) has warned that criminals are weaponising AI to scam consumers. Deepfakes, cloned voices, and slick messages mimic trusted people and exploit urgency. Learning the new warning signs cuts risk quickly.

Imposter deepfakes and romance ruses often begin with perfect profiles or familiar voices pushing you to pay or invest. Grandparent scams use cloned audio in fake emergencies; agree a family passphrase and verify on a separate channel. Influencers may flaunt fabricated credentials and followers.

Automated attacks now use AI to sidestep basic defences and steal passwords or card details. Reduce exposure with two-factor authentication, regular updates, and a reputable password manager. Pause before clicking unexpected links or attachments, even from known names.

Investment frauds increasingly tout vague ‘AI-powered’ returns while simulating growth and testimonials, then blocking withdrawals. Beware guarantees of no risk, artificial deadlines, unsolicited messages, and recruit-to-earn offers. Research independently and verify registrations before sending money.

DFPI advises careful verification before acting. Confirm identities through trusted channels, refuse to move money under pressure, and secure devices. Report suspicious activity promptly; smart habits remain the best defence.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Rare but real: mental health risks at ChatGPT scale

OpenAI says a small share of ChatGPT users show possible signs of mental health emergencies each week, including mania, psychosis, or suicidal thoughts. The company estimates 0.07 percent of weekly users and says safety prompts are triggered in such cases. Critics argue that small percentages scale at ChatGPT’s size: against the roughly 800 million weekly users OpenAI has reported, 0.07 percent is on the order of half a million people.

A further 0.15 percent of weekly users have conversations containing explicit indicators of potential suicidal planning or intent. Updates aim to respond more safely and empathetically, and to flag indirect self-harm signals. Sensitive chats can be routed to safer models in a new window.

More than 170 clinicians across 60 countries advise OpenAI on risk cues and responses. Guidance focuses on encouraging users to seek real-world support. Researchers warn vulnerable people may struggle to act on on-screen warnings.

External specialists see both value and limits. AI may widen access when services are stretched, yet automated advice can mislead. Risks include reinforcing delusions and misplaced trust in authoritative-sounding output.

Legal and public scrutiny is rising after high-profile cases linked to chatbot interactions. Families and campaigners want more transparent accountability and stronger guardrails. Regulators continue to debate transparency, escalation pathways, and duty of care.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Celebrity estates push back on Sora as app surges to No.1

OpenAI’s short-video app Sora topped one million downloads in under a week, then ran headlong into a likeness-rights firestorm. Celebrity families and studios demanded stricter controls. Estates for figures like Martin Luther King Jr. sought blocks on unauthorised cameos.

Users showcased hyperreal mashups that blurred satire and deception, from cartoon crossovers to dead celebrities in improbable scenes. All the clips are AI-made, yet reposting across platforms spread confusion, leaving viewers facing a constant real-or-fake dilemma.

Rights holders pressed for consent, compensation, and veto power over characters and personas. OpenAI shifted toward opt-in for copyrighted properties and enabled estate requests to restrict cameos. Policy language on who qualifies as a public figure remains fuzzy.

Agencies and unions amplified pressure, warning of exploitation and reputational risks. Detection firms reported a surge in takedown requests for unauthorised impersonations. Watermarks exist, but removal tools undercut provenance and complicate enforcement.

Researchers warned about a growing fog of doubt as realistic fakes multiply. Everyday people are placed in deceptive scenarios, while bad actors exploit deniability. OpenAI promised stronger guardrails as Sora scales within tighter rules.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UN cybercrime treaty signed in Hanoi amid rights concerns

Around 60 countries signed a landmark UN cybercrime convention in Hanoi, seeking faster cooperation against online crime. Leaders cited trillions in annual losses from scams, ransomware, and trafficking. The pact enters into force after 40 ratifications.

Supporters say the treaty will streamline evidence sharing, extradition requests, and joint investigations. Provisions target phishing, ransomware, online exploitation, and hate speech. Backers frame the deal as a boost to global security.

Critics warn the text’s breadth could criminalise security research and dissent. The Cybersecurity Tech Accord called it a surveillance treaty. Activists fear expansive data sharing with weak safeguards.

The UNODC argues the agreement includes rights protections and space for legitimate research. Officials say oversight and due process remain essential. Implementation choices will decide outcomes on the ground.

The EU, Canada, and Russia signed in Hanoi, underscoring geopolitical buy-in. Vietnam, as host, drew scrutiny over censorship and arrests. Officials there cast the treaty as a step toward resilience and stature.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!