AI-powered Opera Neon browser launches with premium subscription

After its announcement in May, Opera has started rolling out Neon, its first AI-powered browser. Unlike traditional browsers, Neon is designed for professionals who want AI to simplify complex online workflows.

The browser introduces Tasks, which act like self-contained workspaces. AI can understand context, compare sources, and operate across multiple tabs simultaneously to manage projects more efficiently.

Neon also features cards and reusable AI prompts that users can customise or download from a community store, streamlining repeated actions and tasks.

Its standout tool, Neon Do, performs real-time on-screen actions such as opening tabs, filling forms, and gathering data, while keeping everything local. Opera says no data is shared, and all information is deleted after 30 days.

Neon is available by subscription at $19.90 per month. Invitations are limited during rollout, but Opera promises broader availability soon.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

California enacts first state-level AI safety law

In the US, California Governor Gavin Newsom has signed SB 53, a landmark law establishing transparency and safety requirements for large AI companies.

The legislation obliges major AI developers such as OpenAI, Anthropic, Meta, and Google DeepMind to disclose their safety protocols. It also introduces whistle-blower protections and a reporting mechanism for safety incidents, including cyberattacks and autonomous AI behaviour not covered by the EU AI Act.

Reactions across the industry have been mixed. Anthropic supported the law, while Meta and OpenAI lobbied against it, with OpenAI publishing an open letter urging Newsom not to sign. Tech firms have warned that state-level measures could create a patchwork of regulation that stifles innovation.

Despite resistance, the law positions California as a national leader in AI governance. Newsom said the state had demonstrated that it was possible to safeguard communities without stifling growth, calling AI ‘the new frontier in innovation’.

Similar legislation is under consideration in New York, while California lawmakers are also debating SB 243, a separate bill that would regulate AI companion chatbots.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New Facebook tools help creators boost fan engagement

Facebook has introduced new tools designed to help creators increase engagement and build stronger communities on the platform. The update includes fan challenges, custom badges for top contributors, and new insights to track audience loyalty.

Fan challenges allow creators with over 100,000 followers to issue prompts inviting fans to share content on a theme or event. Contributions are displayed in a dedicated feed, with a leaderboard ranking entries by reactions.

Challenges can run for a week or stretch over several months, giving creators flexibility in engaging their audiences.

Meta has also launched custom fan badges for creators with more than one million followers, enabling them to rename Top Fan badges each month. The feature gives elite-level fans extra recognition and strengthens the sense of community. Fans can choose whether to accept the custom badge.

To complement these features, Facebook has added new metrics showing the number of Top Fans on a page. These insights help creators measure engagement efforts and reward their most dedicated followers.

The tools are now available to eligible creators worldwide.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT gets family safety update with parental controls

OpenAI has introduced new parental controls for ChatGPT, giving families greater oversight of how teens use the AI platform. The tools, which are live for all users, allow parents to link accounts with their children and manage settings through a simple control dashboard.

The system introduces stronger safeguards for teen accounts, including filters on graphic or harmful content and restrictions on roleplay involving sex, violence or extreme beauty ideals.

Parents can also fine-tune features such as voice mode, memory and image generation, or set quiet hours when ChatGPT cannot be accessed.

A notification mechanism has been added to alert parents if a teen shows signs of acute distress, escalating to emergency services in critical cases. OpenAI said the controls were shaped by consultation with experts, advocacy groups, and policymakers and will be expanded as research evolves.

To complement the parental controls, a new online resource hub has been launched to help families learn how ChatGPT works and explore positive uses in study, creativity and daily life.

OpenAI also plans to roll out an age-prediction system that automatically applies teen-appropriate settings.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Lufthansa to cut thousands of jobs as AI reshapes operations

Lufthansa Group announced it will cut 4,000 jobs by 2030 as part of a restructuring drive powered by AI and digitalisation. Most of the affected positions will be administrative roles in Germany, with operational staff largely unaffected.

The company said it aims to improve efficiency by using AI to reduce duplication across its airlines (Lufthansa, SWISS, Austrian Airlines, Brussels Airlines and ITA Airways). It noted that advances in AI would streamline work and allow greater integration within the group.

Despite the job cuts, demand for flights remains high. Capacity is constrained by limited aircraft and engine supply, which has kept planes full and revenue strong. Lufthansa said it expects significantly higher profitability by the end of the decade.

The airline also confirmed plans for the largest fleet modernisation in its history, with over 230 new aircraft to be delivered by 2030, including 100 long-haul jets. Lufthansa employed more than 101,000 people in 2024 and posted revenue of €37.6 billion.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google tests AI hosts for YouTube Music

Google is testing AI-generated hosts for YouTube Music through its new YouTube Labs programme. The AI hosts will appear while users listen to mixes and radio stations, providing commentary, fan trivia, and stories to enrich the listening experience.

The feature is designed to resemble a radio DJ, but because it relies on AI, occasional inaccuracies are possible.

YouTube Labs, similar to Google Labs, allows the company to trial new AI features and gather user feedback before wider release. The AI hosts are currently available to a limited group of US testers, who can sign up via YouTube Labs and snooze commentary for an hour or all day.

The rollout follows Google’s Audio Overviews in NotebookLM, which turns research papers and documents into podcast-style summaries. Past AI experiments on YouTube, such as automatic dubbing, faced criticism as viewers had limited control over translations.

The AI hosts experiment shows Google’s push to integrate AI across its apps, enhancing engagement while monitoring feedback before wider rollout.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Bye Bye Google AI hides unwanted AI results in Search

Google is pushing AI deeper into its services, with AI Overviews already reaching billions of users and AI Mode now added to Search. Chrome is also being rebranded as an AI-first browser.

Not all users welcome these changes. Concerns remain about accuracy, intrusive design and Google’s growing control over how information is displayed. Unlike other features, AI elements in Search cannot be turned off directly, leaving users reliant on third-party solutions.

One such solution is the new ‘Bye Bye, Google AI’ extension, which hides AI-generated results and unwanted blocks such as sponsored links, shopping sections and discussion forums.

The extension works across Chromium-based browsers, though it relies on CSS and may break when Google updates its interface.
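The article does not show the extension’s actual rules, but a minimal sketch illustrates why CSS-based hiding is fragile. All selectors below are hypothetical placeholders, not Google’s real markup; when the Search interface changes its element names, rules like these silently stop matching.

```javascript
// Hypothetical content-script sketch: hide unwanted Search blocks by injecting CSS.
// The selectors are illustrative only; real extensions must track Google's markup.
const HIDE_SELECTORS = [
  "div[data-ai-overview]", // hypothetical AI Overview container
  "div[data-sponsored]",   // hypothetical sponsored-links block
];

// Build a single CSS rule that removes every matched block from the layout.
function buildHideRule(selectors) {
  return `${selectors.join(", ")} { display: none !important; }`;
}

// Inject the rule into the page via a <style> element.
function injectHideStyle(doc, selectors) {
  const style = doc.createElement("style");
  style.textContent = buildHideRule(selectors);
  doc.head.appendChild(style);
  return style;
}
```

Because the blocks are only hidden with `display: none`, they are still fetched and rendered in the DOM; the approach changes presentation, not what Google serves.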

The debate reflects wider unease about AI in Search: while Google claims these features improve the user experience, critics argue they risk spreading false information and keeping traffic within Google's ecosystem rather than directing users to original publishers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta expands global rollout of teen accounts for Facebook and Messenger

US tech giant Meta is expanding its dedicated teen accounts to Facebook and Messenger users worldwide, extending a safety system first introduced on Instagram. The move introduces more parental controls and restrictions to protect younger users on Meta’s platforms.

The accounts, now mandatory for teens, include stricter privacy settings that limit contact with unknown adults. Parents can supervise how their children use the apps, monitor screen time, and view who their teens are messaging.

For younger users aged 13 to 15, parental permission is required before adjusting safety-related settings. Meta is also deploying AI tools to detect teens lying about their age.

Alongside the global rollout, Instagram is expanding a school partnership programme in the US, allowing middle and high schools to report bullying and problematic behaviour directly.

The company says early feedback from participating schools has been positive, and the scheme is now open to all schools nationwide.

The expansion comes as Meta faces lawsuits and investigations over its record on child safety. By strengthening parental controls and school-based reporting, the company aims to address growing criticism while tightening protections for its youngest users.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK to introduce mandatory digital ID for work

The UK government has announced plans to make digital ID mandatory for proving the right to work by the end of the current Parliament, expected no later than 2029. Prime Minister Sir Keir Starmer said the scheme would tighten controls on illegal employment while offering wider benefits for citizens.

The digital ID will be stored on smartphones in a format similar to contactless payment cards or the NHS app. It is expected to include core details such as name, date of birth, nationality or residency status, and a photo.

The system aims to provide a more consistent and secure alternative to paper-based checks, reducing the risk of forged documents and streamlining verification for employers.

Officials believe the scheme could extend beyond employment, potentially simplifying access to driving licences, welfare, childcare, and tax records.

A consultation later in the year will decide whether additional data, such as residential addresses, should be integrated. The government has also pledged accessibility for citizens unable to use smartphones.

The proposal has faced political opposition, with critics warning of privacy risks, administrative burdens, and fears of creating a de facto compulsory ID card system.

Despite these objections, the government argues that digital ID will strengthen border controls, counter the shadow economy, and modernise public service access.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

LinkedIn expands AI training with default data use

LinkedIn will use member profile data to train its AI systems by default from 3 November 2025. The policy, already in place in the US and select markets, will now extend to more regions and applies to users aged 18 and over; members who prefer not to share their information must opt out manually via account settings.

According to LinkedIn, the types of data that may be used include account details, email addresses, payment and subscription information, and service-related data such as IP addresses, device IDs, and location information.

Once the setting is disabled, a member’s profile will no longer be used for AI training, although information collected earlier may remain in the system. Users can request the removal of past data through a Data Processing Objection Form.

Meta and X have already adopted similar practices in the US, allowing their platforms to use user-generated posts for AI training. LinkedIn insists its approach complies with privacy rules but leaves the choice in members’ hands.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!