Copilot Mode turns Edge into an active assistant

Microsoft says the browser should work with you, not just wait for clicks. Copilot Mode adds chat-first tabs, multi-tab reasoning, and a dynamic pane for in-context help, letting users plan trips, compare options, and generate schedules without tab chaos.

Microsoft Copilot now resumes past sessions, so projects pick up exactly where you stopped. It can execute multi-step actions, like building walking tours, end-to-end. Optional history signals improve suggestions and speed up research-heavy tasks.

Voice controls handle quick actions and deeper chores with conversational prompts. Ask Copilot to open pages, summarise threads, or unsubscribe you from promo emails. Reservations and other multi-step chores are rolling out next.

Journeys groups past browsing into topic timelines for fast re-entry, with explicit opt-in. Privacy controls are prominent: clear cues when Copilot listens, acts, or views. You can toggle Copilot Mode off anytime.

Security features round things out: local AI blocks scareware overlays by default, and built-in password tools continuously create, store, and monitor credentials. Copilot Mode is available in all Copilot markets on Edge for desktop and mobile, with further features coming soon.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Australia demands answers from AI chatbot providers over child safety

Australia’s eSafety Commissioner has issued legal notices to four major AI companion platforms, requiring them to explain how they are protecting children from harmful or explicit content.

Character.ai, Nomi, Chai, and Chub.ai were all served under the country’s Online Safety Act and must demonstrate compliance with Australia’s Basic Online Safety Expectations.

The notices follow growing concern that AI companions, designed for friendship and emotional support, can expose minors to sexualised conversations, suicidal ideation, and other psychological risks.

eSafety Commissioner Julie Inman Grant said the companies must show how their systems prevent such harms, not merely react to them, warning that failure to comply could lead to penalties of up to $825,000 per day.

AI companion chatbots have surged in popularity among young users, with Character.ai alone attracting nearly 160,000 monthly active users in Australia.

The Commissioner stressed that these services must integrate safety measures by design, as new enforceable codes now extend to AI platforms that previously operated with minimal oversight.

The move comes amid wider efforts to regulate emerging AI technologies and ensure stronger child protection standards online.

Breaches of the new codes could result in civil penalties of up to $49.5 million, marking one of the toughest online safety enforcement regimes globally.

Alaska Airlines grounds all US flights after IT failure

Alaska Airlines temporarily grounded all US flights on Thursday following a nationwide IT outage. The carrier confirmed a technical failure had disrupted operations and imposed a ground stop while engineers worked to restore systems.

The outage also affected Horizon Air, a regional airline operated by Alaska Airlines, according to the Federal Aviation Administration. The company has not disclosed how many flights were delayed or cancelled.

Alaska Airlines, headquartered in Seattle, serves over 140 destinations across 37 states and 12 countries. Its partner, Hawaiian Airlines, remained unaffected by the disruption, which marked the carrier’s second major outage this year.

The incident comes amid wider US aviation challenges linked to staffing shortages from the ongoing government shutdown. Officials said normal flight operations were gradually resuming as systems recovered nationwide.

Lawmakers urge EU to curb Huawei’s role in solar inverters over security risks

Lawmakers and security officials are increasingly worried that Huawei’s dominant role in solar inverters could create a new supply-chain vulnerability for Europe’s power grids. Two MEPs have written to the European Commission urging immediate steps to limit ‘high-risk’ vendors in energy systems.

Inverters convert the electricity generated by solar panels into the current fed into the power network; many are internet-connected so vendors can perform remote maintenance. Cyber experts warn that remote access to large numbers of inverters could be abused to shut devices down or change settings en masse, creating surges, drops or wider instability across the grid.

Chinese firms, led by Huawei and Sungrow, supply a large share of Europe’s installed inverter capacity. SolarPower Europe estimates Chinese companies account for roughly 65 per cent of the market. Some member states are already acting: Lithuania has restricted remote access to sizeable Chinese installations, while agencies in the Czech Republic and Germany have flagged specific Huawei components for further scrutiny.

The European Commission is preparing an ICT supply-chain toolbox to de-risk critical sectors, with solar inverters listed among priority areas. Suspicion of Chinese technology has surged in recent years. Beijing, under President Xi Jinping, requires domestic firms to comply with government requests for data sharing and to report software vulnerabilities, raising Western fears of potential surveillance.

Amelia brings heads-up guidance to Amazon couriers

Amazon has unveiled ‘Amelia’, AI-powered smart glasses for delivery drivers with a built-in display and camera, paired with a vest that carries a photo button. The glasses are now piloting with hundreds of drivers across more than a dozen delivery partners.

Designed for last-mile efficiency, Amelia automatically shuts down when a vehicle moves to prevent distraction, includes a hardware kill switch for the camera and mic, and aims to save about 30 minutes per 8–10-hour shift by streamlining repetitive tasks.

Initial availability is planned for the US market and the rest of North America before global expansion, with Amazon emphasizing that Amelia is custom-built for drivers, though consumer versions aren’t ruled out. Pilots involve real routes and live deliveries to customers.

Amazon also showcased a warehouse robotic arm to sort parcels faster and more safely, as well as an AI orchestration system that ingests real-time and historical data to predict bottlenecks, propose fixes, and keep fulfillment operations running smoothly.

The move joins a broader push into wearables from Big Tech. Unlike Meta’s consumer-oriented Ray-Ban smart glasses, Amelia targets enterprise use, promising faster package location, fewer taps, and tighter integration with Amazon’s delivery workflow.

ChatGPT faces EU’s toughest platform rules after 120 million users

OpenAI’s ChatGPT could soon face the EU’s strictest platform regulations under the Digital Services Act (DSA), after surpassing 120 million monthly users in Europe.

The milestone places OpenAI’s chatbot above the 45 million-user threshold that triggers heightened oversight.

The DSA imposes stricter obligations on major platforms such as Meta, TikTok, and Amazon, requiring greater transparency, risk assessments, and annual fees to fund EU supervision.

The European Commission confirmed it has begun assessing whether ChatGPT qualifies for ‘very large online platform’ status, which would bring the total number of regulated platforms to 26.

OpenAI reported that its ChatGPT search function alone had 120.4 million monthly active users across the EU in the six months ending 30 September 2025. Globally, the chatbot now counts around 700 million weekly users.

If designated under the DSA, ChatGPT would be required to curb illegal and harmful content more rigorously and demonstrate how its algorithms handle information, marking the EU’s most direct regulatory test yet for generative AI.

EU sets new rules for cloud sovereignty framework

The European Commission has launched its Cloud Sovereignty Framework to assess the independence of cloud services. The initiative defines clear criteria and scoring methods for evaluating how providers meet EU sovereignty standards.

Under the framework, the Sovereign European Assurance Level, or SEAL, will rank services by compliance. Assessments cover strategic, legal, operational, and technological aspects, aiming to strengthen data security and reduce reliance on foreign systems.

Officials say the framework will guide both public authorities and private companies in choosing secure cloud options. It also supports the EU’s broader goal of achieving technological autonomy and protecting sensitive information.

The Commission’s move follows growing concern over extra-EU data transfers and third-country surveillance. Industry observers view it as a significant step toward Europe’s ambition for trusted, sovereign digital infrastructure.

YouTube launches likeness detection to protect creators from AI misuse

YouTube has expanded its AI safeguards with a new likeness detection system that identifies AI-generated videos imitating creators’ faces or voices. The tool is now available to eligible members of the YouTube Partner Program after a limited pilot phase.

Creators can review detected videos and request their removal under YouTube’s privacy rules or submit copyright claims.

YouTube said the feature aims to protect users from having their image used to promote products or spread misinformation without consent.

The onboarding process requires identity verification through a short selfie video and photo ID. Creators can opt out at any time, with scanning ending within a day of deactivation.

YouTube has backed recent legislative efforts, such as the NO FAKES Act in the US, which targets deceptive AI replicas. The move highlights growing industry concern over deepfake misuse and the protection of digital identity.

Meta strengthens protection for older adults against online scams

US tech giant Meta has intensified its campaign against online scams targeting older adults, marking Cybersecurity Awareness Month with new safety tools and global partnerships.

Meta said it had detected and disrupted nearly eight million fraudulent accounts on Facebook and Instagram since January, many linked to organised scam centres operating across Asia and the Middle East.

The social media giant is joining the National Elder Fraud Coordination Center in the US, alongside partners including Google, Microsoft and Walmart, to strengthen investigations into large-scale fraud operations.

It is also collaborating with law enforcement and research groups such as Graphika to identify scams involving fake customer service pages, fraudulent financial recovery services and deceptive home renovation schemes.

Meta continues to roll out product updates to improve online safety. WhatsApp now warns users when they share screens with unknown contacts, while Messenger is testing AI-powered scam detection that alerts users to suspicious messages.

Across Facebook, Instagram and WhatsApp, users can activate passkeys and complete a Security Checkup to reinforce account protection.

The company has also partnered with organisations worldwide to raise scam awareness among older adults, from digital literacy workshops in Bangkok to influencer-led safety campaigns across Europe and India.

These efforts form part of Meta’s ongoing drive to protect users through a mix of education, advanced technology and cross-industry cooperation.

Judge bars NSO Group from using spyware to target WhatsApp in landmark ruling

A US federal judge has permanently barred NSO Group, a commercial spyware company, from targeting WhatsApp and, in the same ruling, cut damages owed to Meta from $168 million to $4 million.

The decision by Judge Phyllis Hamilton of the Northern District of California stems from NSO’s 2019 hack of WhatsApp, when the company’s Pegasus spyware targeted 1,400 users through a zero-click exploit. The injunction bans NSO from accessing or assisting access to WhatsApp’s systems, a restriction the firm previously warned could threaten its business model.

An NSO spokesperson said the order ‘will not apply to NSO’s customers, who will continue using the company’s technology to help protect public safety,’ but declined to clarify how that interpretation aligns with the court’s wording. By contrast, Will Cathcart, head of WhatsApp, stated on X that the decision ‘bans spyware maker NSO from ever targeting WhatsApp and our global users again.’

Pegasus has allegedly been used against journalists, activists, and dissidents worldwide. The ruling sets an important precedent for US companies whose platforms have been compromised by commercial surveillance firms.
