Four new Echo devices debut with Amazon’s next-gen Alexa+

Amazon has unveiled four new Echo devices powered by Alexa+, its next-generation AI assistant. The lineup includes Echo Dot Max, Echo Studio, Echo Show 8, and Echo Show 11, all designed for personalised, ambient AI-driven experiences. Buyers will automatically gain access to Alexa+.

At the core are the new AZ3 and AZ3 Pro chips, which include AI accelerators that power advanced models for speech, vision, and ambient interaction. The Echo Dot Max, priced at $99.99, features a two-speaker system with triple the bass of its predecessor, while the Echo Studio, priced at $219.99, adds spatial audio and Dolby Atmos.

The Echo Show 8 and Echo Show 11 introduce HD displays, enhanced audio, and intelligent sensing capabilities. Both feature 13-megapixel cameras that adapt to lighting and personalise interactions. The Echo Show 8 will cost $179.99, while the Echo Show 11 is priced at $219.99.

Beyond hardware, Alexa+ brings deeper conversational skills and more intelligent daily support, spanning home organisation, entertainment, health, wellness, and shopping. Amazon also introduced the Alexa+ Store, a platform for discovering third-party services and integrations.

The Echo Dot Max and Echo Studio will launch on October 29, while the Echo Show 8 and Echo Show 11 arrive on November 12. Amazon positions the new portfolio as a leap toward making ambient AI experiences central to everyday living.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UK users lose access to Imgur amid watchdog probe

Imgur has cut off access for UK users after regulators warned its parent company, MediaLab AI, of a potential fine over child data protection.

Since 30 September, visitors to the platform have been met with a notice saying that content is unavailable in their region, and embedded Imgur images on other sites are no longer visible.

The UK’s Information Commissioner’s Office (ICO) began investigating the platform in March, questioning whether it complied with data laws and the Children’s Code.

The regulator said it had issued MediaLab with a notice of intent to fine the company following provisional findings. Officials also emphasised that leaving the UK would not shield Imgur from responsibility for any past breaches.

Some users speculated that the withdrawal was tied to new duties under the Online Safety Act, which requires platforms to check whether visitors are over 18 before allowing access to harmful content.

However, both the ICO and Ofcom stated that the withdrawal was a commercial decision by Imgur. Other MediaLab services, such as Kik Messenger, continue to operate in the UK with age verification measures in place.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI launches Instant Checkout to enable in-chat purchases

OpenAI has launched Instant Checkout, a feature that lets users make direct purchases within ChatGPT. The initial rollout lets US users buy from Etsy sellers, with Shopify merchants to follow.

The system is powered by the Agentic Commerce Protocol, which OpenAI co-developed with Stripe, and currently supports single-item purchases. Future updates will add multi-item carts and expand to more regions.

According to OpenAI, product results in ChatGPT are organic and ranked for relevance. The e-commerce framework will be open-sourced to accelerate integrations for merchants and developers. Users can pay using cards already on file, and transactions involve explicit confirmation steps, scoped payment tokens, and limited data sharing to build trust.
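For readers curious how such a flow might look in practice, the sketch below imagines a single-item checkout built around the elements described above: a scoped payment token, an explicit confirmation step, and minimal data sharing. It is a rough illustration only; the class and function names are invented for this example and are not taken from the published Agentic Commerce Protocol specification.

```python
# Hypothetical sketch of a single-item agentic checkout flow.
# None of these names come from the actual Agentic Commerce Protocol;
# they only illustrate the ideas described above: a scoped payment
# token, an explicit user confirmation step, and limited data sharing.

from dataclasses import dataclass


@dataclass
class ScopedPaymentToken:
    """A one-off token limited to a single merchant and a maximum amount."""
    merchant_id: str
    max_amount_cents: int
    currency: str


@dataclass
class CheckoutRequest:
    item_id: str
    quantity: int          # single-item checkout: quantity applies to one listing
    token: ScopedPaymentToken
    user_confirmed: bool   # must be set by an explicit confirmation step


def submit_checkout(request: CheckoutRequest, total_cents: int) -> str:
    """Refuse to complete a purchase without explicit confirmation,
    and cap the charge at the scope of the payment token."""
    if not request.user_confirmed:
        raise PermissionError("User has not explicitly confirmed the purchase")
    if total_cents > request.token.max_amount_cents:
        raise ValueError("Charge exceeds the scope of the payment token")
    # A real integration would now capture payment via the merchant's
    # processor (e.g. Stripe) and return an order reference.
    return f"order-for-{request.item_id}"


if __name__ == "__main__":
    token = ScopedPaymentToken(merchant_id="etsy-shop-123",
                               max_amount_cents=4_500, currency="USD")
    order = submit_checkout(
        CheckoutRequest(item_id="ceramic-mug", quantity=1,
                        token=token, user_confirmed=True),
        total_cents=4_200,
    )
    print(order)
```

The point of the sketch is simply that the assistant never holds raw card details or open-ended spending authority: the token is scoped to one merchant and one amount, and nothing happens without the user's confirmation.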

Michelle Fradin, OpenAI’s product lead for ChatGPT commerce, said the goal is to move beyond information retrieval and support real-world actions. Stripe’s president for technology and business, Will Gaybrick, described the partnership as laying economic infrastructure for AI.

Merchants will pay a small fee on completed purchases, while users are not charged extra and product prices remain unchanged.

Reuters reported that shares in Etsy and Shopify rose sharply following the announcement, with Etsy closing up nearly 16 percent and Shopify more than 6 percent. OpenAI plans to extend the system to more merchants and payment types over time.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Gen Z most vulnerable to phishing scams

A global survey commissioned by Yubico suggests that younger workers are more vulnerable to phishing scams than older generations. Gen Z respondents reported the highest level of interaction with phishing messages, with 62 percent admitting they engaged with a scam in the past year.

The study gathered responses from 18,000 employed adults in nine countries, including the UK, US, France, and Japan. In the past twelve months, 44 percent of participants admitted to clicking on or replying to a phishing message.

AI is raising the stakes for cybersecurity. Seventy percent of those surveyed believe phishing has become more effective due to AI, and 78 percent said the attacks seem more sophisticated. More than half could not confidently identify a phishing email when shown one.

Despite growing risks, cyber defences remain patchy. Only 48 percent said their workplace used multi-factor authentication across all services, and 40 percent reported never receiving cybersecurity training from their employer.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Rising stress leaves cyber professionals at breaking point

Burnout is a significant challenge in the cybersecurity sector, as workers face rising threats and constant pressure to defend organisations. A BBC report highlights how professionals often feel overworked and undervalued, with stress levels leading some to take extended leave.

UK-based surveys reflect growing strain. Membership body ISC2 found that job satisfaction among cybersecurity staff dropped in 2024, with burnout cited as a key issue. Experts say demands have increased while resources remain stretched, leaving staff expected to stay on call around the clock.

Hackers are becoming more aggressive, targeting health services, retailers, and critical national infrastructure. Nation-state actors, including North Korean groups linked to large crypto thefts, are also stepping up activity. These attacks add to the psychological burden on frontline defenders.

Industry figures warn that high turnover risks weakening cyber resilience, especially in junior roles. Initiatives like Cybermindz call for better mental health support, while some argue for protections akin to those for first responders.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI agents complete first secure transaction with Mastercard and PayOS

PayOS and Mastercard have completed the first live agentic payment using a Mastercard Agentic Token, marking a pivotal step for AI-driven commerce. The demonstration, powered by Mastercard Agent Pay, extends the tokenisation infrastructure that already underpins mobile payments and card storage.

The system enables AI agents to initiate payments while enforcing consent, authentication, and fraud checks, thereby forming what Mastercard refers to as the trust layer. It shows how card networks are preparing for agentic transactions to become central to digital commerce.
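As a rough illustration of that trust layer, the sketch below shows how an agent-initiated payment might be gated by consent, authentication and a fraud check before a tokenised credential is used. All names here are invented for the example and do not reflect Mastercard Agent Pay APIs.

```python
# Illustrative sketch only: the consent, authentication and fraud-check
# gates described above, applied before an agent may spend a tokenised
# credential. All names are hypothetical, not Mastercard Agent Pay APIs.

from dataclasses import dataclass, field


@dataclass
class AgenticToken:
    """Stand-in for a network-issued, agent-scoped payment token."""
    token_id: str
    cardholder_id: str
    allowed_merchants: set = field(default_factory=set)
    spend_limit_cents: int = 0


def cardholder_consented(token: AgenticToken, merchant: str) -> bool:
    # Consent: the cardholder pre-approved this merchant for this agent.
    return merchant in token.allowed_merchants


def agent_authenticated(agent_id: str) -> bool:
    # Authentication: placeholder for verifying the agent's identity
    # (in practice, cryptographic credentials issued to the agent).
    return agent_id.startswith("agent-")


def passes_fraud_check(amount_cents: int, token: AgenticToken) -> bool:
    # Fraud check: here simply a spend-limit rule, for illustration.
    return amount_cents <= token.spend_limit_cents


def authorise_agent_payment(agent_id: str, token: AgenticToken,
                            merchant: str, amount_cents: int) -> bool:
    """Authorise only if every gate in the trust layer passes."""
    return (agent_authenticated(agent_id)
            and cardholder_consented(token, merchant)
            and passes_fraud_check(amount_cents, token))


if __name__ == "__main__":
    token = AgenticToken(token_id="tok-1", cardholder_id="user-42",
                         allowed_merchants={"grocer.example"},
                         spend_limit_cents=10_000)
    print(authorise_agent_payment("agent-shopper", token,
                                  "grocer.example", 7_500))  # True
```

The design choice the demonstration highlights is that the agent never touches the underlying card: it only presents a token whose use is conditional on the cardholder's prior consent and the network's checks.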

Mastercard’s Chief Digital Officer, Pablo Fourez, stated that the company is developing a secure and interoperable ecosystem for AI-driven payments, underpinned by tokenised credentials. The framework aims to prepare for a future where the internet itself supports native agentic commerce.

For PayOS, the milestone represents a shift from testing to commercialisation. Chief executive Johnathan McGowan said the company is now onboarding customers and offering tools for fraud prevention, payments risk management, and improved user experiences.

The achievement signals a broader transition as agentic AI moves from pilot to real-world deployment. If security models remain effective, agentic payments could soon differentiate platforms, merchants, and issuers, embedding autonomy into digital transactions.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI-powered Opera Neon browser launches with premium subscription

After its announcement in May, Opera has started rolling out Neon, its first AI-powered browser. Unlike traditional browsers, Neon is designed for professionals who want AI to simplify complex online workflows.

The browser introduces Tasks, which act like self-contained workspaces. AI can understand context, compare sources, and operate across multiple tabs simultaneously to manage projects more efficiently.

Neon also features Cards, reusable AI prompts that users can customise or download from a community store, streamlining repeated actions and tasks.

Its standout tool, Neon Do, performs real-time on-screen actions such as opening tabs, filling forms, and gathering data, while keeping everything local. Opera says no data is shared, and all information is deleted after 30 days.

Neon is available by subscription at $19.90 per month. Invitations are limited during rollout, but Opera promises broader availability soon.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

California enacts first state-level AI safety law

In the US, California Governor Gavin Newsom has signed SB 53, a landmark law establishing transparency and safety requirements for large AI companies.

The legislation obliges major AI developers such as OpenAI, Anthropic, Meta, and Google DeepMind to disclose their safety protocols. It also introduces whistle-blower protections and a reporting mechanism for safety incidents, including cyberattacks and autonomous AI behaviour, categories of incident not covered by the EU AI Act.

Reactions across the industry have been mixed. Anthropic supported the law, while Meta and OpenAI lobbied against it, with OpenAI publishing an open letter urging Newsom not to sign. Tech firms have warned that state-level measures could create a patchwork of regulation that stifles innovation.

Despite resistance, the law positions California as a national leader in AI governance. Newsom said the state had demonstrated that it was possible to safeguard communities without stifling growth, calling AI ‘the new frontier in innovation’.

Similar legislation is under consideration in New York, while California lawmakers are also debating SB 243, a separate bill that would regulate AI companion chatbots.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT gets family safety update with parental controls

OpenAI has introduced new parental controls for ChatGPT, giving families greater oversight of how teens use the AI platform. The tools, which are live for all users, allow parents to link accounts with their children and manage settings through a simple control dashboard.

The system introduces stronger safeguards for teen accounts, including filters on graphic or harmful content and restrictions on roleplay involving sex, violence or extreme beauty ideals.

Parents can also fine-tune features such as voice mode, memory and image generation, or set quiet hours during which ChatGPT cannot be accessed.

A notification mechanism has been added to alert parents if a teen shows signs of acute distress, escalating to emergency services in critical cases. OpenAI said the controls were shaped by consultation with experts, advocacy groups, and policymakers and will be expanded as research evolves.

To complement the parental controls, a new online resource hub has been launched to help families learn how ChatGPT works and explore positive uses in study, creativity and daily life.

OpenAI also plans to roll out an age-prediction system that automatically applies teen-appropriate settings.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EDPB issues guidelines on GDPR-DSA tension for platforms

On 12 September 2025, the European Data Protection Board (EDPB) adopted draft guidelines detailing how online platforms should reconcile requirements under the GDPR and the Digital Services Act (DSA). The draft is now open for public consultation through 31 October.

The guidelines address key areas of tension, including proactive investigations, notice-and-action systems, deceptive design, recommender systems, age safety and transparency in advertising. They emphasise that DSA obligations must be implemented in ways consistent with GDPR principles.

For instance, the guidelines suggest that proactive investigations of illegal content should generally be grounded in ‘legitimate interests’, include safeguards for accuracy, and avoid automated decisions with legal effects.

Platforms are also told to offer users recommender systems that are not based on profiling. The guidelines also encourage data protection impact assessments (DPIAs) where high risks are identified.

The guidance also clarifies that the DSA does not override the GDPR. Platforms subject to both must ensure lawful, fair and transparent processing while integrating risk analysis and privacy by design. The draft guidelines include practical examples and cross-references to existing EDPB documents.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!