Google tests AI hosts for YouTube Music

Google is testing AI-generated hosts for YouTube Music through its new YouTube Labs programme. The AI hosts will appear while users listen to mixes and radio stations, providing commentary, fan trivia, and stories to enrich the listening experience.

The feature is designed to mimic a radio DJ but relies on AI, so occasional inaccuracies are possible.

YouTube Labs, similar to Google Labs, allows the company to trial new AI features and gather user feedback before wider release. The AI hosts are currently available to a limited group of US testers, who can sign up via YouTube Labs and snooze commentary for an hour or all day.

The rollout follows Google’s Audio Overviews in NotebookLM, which turn research papers and documents into podcast-style summaries. Past AI experiments on YouTube, such as automatic dubbing, faced criticism because viewers had limited control over translations.

The AI hosts experiment reflects Google’s push to integrate AI across its apps, aiming to boost engagement while gathering feedback before a wider rollout.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Bye Bye Google AI hides unwanted AI results in Search

Google is pushing AI deeper into its services, with AI Overviews already reaching billions of users and AI Mode now added to Search. Chrome is also being rebranded as an AI-first browser.

Not all users welcome these changes. Concerns remain about accuracy, intrusive design and Google’s growing control over how information is displayed. Unlike other features, AI elements in Search cannot be turned off directly, leaving users reliant on third-party solutions.

One such solution is the new ‘Bye Bye, Google AI’ extension, which hides AI-generated results and unwanted blocks such as sponsored links, shopping sections and discussion forums.

The extension works across Chromium-based browsers, though it relies on CSS and may break when Google updates its interface.

The debate reflects wider unease about AI in Search.

While Google claims it improves user experience, critics argue it risks spreading false information and keeping traffic within Google’s ecosystem rather than directing users to original publishers.

Meta expands global rollout of teen accounts for Facebook and Messenger

US tech giant Meta is expanding its dedicated teen accounts to Facebook and Messenger users worldwide, extending a safety system first introduced on Instagram. The move adds more parental controls and restrictions to protect younger users on Meta’s platforms.

The accounts, now mandatory for teens, include stricter privacy settings that limit contact with unknown adults. Parents can supervise how their children use the apps, monitor screen time, and view who their teens are messaging.

For younger users aged 13 to 15, parental permission is required before adjusting safety-related settings. Meta is also deploying AI tools to detect teens lying about their age.

Alongside the global rollout, Instagram is expanding a school partnership programme in the US, allowing middle and high schools to report bullying and problematic behaviour directly.

The company says early feedback from participating schools has been positive, and the scheme is now open to all schools nationwide.

The expansion comes as Meta faces lawsuits and investigations over its record on child safety. By strengthening parental controls and school-based reporting, the company aims to address growing criticism while tightening protections for its youngest users.

UK to introduce mandatory digital ID for work

The UK government has announced plans to make digital ID mandatory for proving the right to work by the end of the current Parliament, expected no later than 2029. Prime Minister Sir Keir Starmer said the scheme would tighten controls on illegal employment while offering wider benefits for citizens.

The digital ID will be stored on smartphones in a format similar to contactless payment cards or the NHS app. It is expected to include core details such as name, date of birth, nationality or residency status, and a photo.

The system aims to provide a more consistent and secure alternative to paper-based checks, reducing the risk of forged documents and streamlining verification for employers.

Officials believe the scheme could extend beyond employment, potentially simplifying access to driving licences, welfare, childcare, and tax records.

A consultation later in the year will decide whether additional data, such as residential addresses, should be integrated. The government has also pledged accessibility for citizens unable to use smartphones.

The proposal has faced political opposition, with critics warning of privacy risks, administrative burdens, and fears of creating a de facto compulsory ID card system.

Despite these objections, the government argues that digital ID will strengthen border controls, counter the shadow economy, and modernise public service access.

LinkedIn expands AI training with default data use

LinkedIn will use member profile data to train its AI systems by default from 3 November 2025. The policy, already in place in the US and select markets, will now extend to more regions and applies to users aged 18 and over. Members who prefer not to share their information must opt out manually via account settings.

According to LinkedIn, the types of data that may be used include account details, email addresses, payment and subscription information, and service-related data such as IP addresses, device IDs, and location information.

Once the setting is disabled, profile data will no longer be added to AI training, although information collected earlier may remain in the system. Users can request the removal of past data through a Data Processing Objection Form.

Meta and X have already adopted similar practices in the US, allowing their platforms to use user-generated posts for AI training. LinkedIn insists its approach complies with privacy rules but leaves the choice in members’ hands.

UK sets up expert commission to speed up NHS adoption of AI

Doctors, researchers and technology leaders will work together to accelerate the safe adoption of AI in the NHS, under a new commission launched by the Medicines and Healthcare products Regulatory Agency (MHRA).

The body will draft recommendations to modernise healthcare regulation, ensuring patients gain faster access to innovations while maintaining safety and public trust.

MHRA stressed that clear rules are vital as AI spreads across healthcare, already helping to diagnose conditions such as lung cancer and strokes in hospitals across the UK.

Backed by ministers, the initiative aims to position Britain as a global hub for health tech investment. Companies including Google and Microsoft will join clinicians, academics, and patient advocates to advise on the framework, expected to be published next year.

The commission will also review the regulatory barriers slowing adoption of tools such as AI-driven note-taking systems, which early trials suggest can significantly boost efficiency in clinical care.

Officials say the framework will provide much-needed clarity for AI in radiology, pathology, and virtual care, supporting the digital transformation of the NHS.

MHRA chief executive Lawrence Tallon called the commission a ‘cultural shift’ in regulation, while Technology Secretary Liz Kendall said it would ensure patients benefit from life-saving technologies ‘quickly and safely’.

New Meta feature floods users with AI slop in TikTok-style feed

Meta has launched a new short-form video feed called Vibes inside its Meta AI app and on meta.ai, offering users endless streams of AI-generated content. The format mimics TikTok and Instagram Reels but consists entirely of algorithmically generated clips.

Mark Zuckerberg unveiled the feature in an Instagram post showcasing surreal creations, from fuzzy creatures leaping across cubes to a cat kneading dough and even an AI-generated Egyptian woman taking a selfie in antiquity.

Users can generate videos from scratch or remix existing clips by adding visuals, music, or stylistic effects before posting to Vibes, sharing via direct message, or cross-posting to Instagram and Facebook Stories.

Meta partnered with Midjourney and Black Forest Labs to support the early rollout, though it plans to transition to its own AI models.

The announcement, however, was derided by users, who criticised the platform for adding yet more ‘AI slop’ to already saturated feeds. One top comment under Zuckerberg’s post bluntly read: ‘gang nobody wants this’.

The launch comes as Meta ramps up its AI investment to catch up with rivals OpenAI, Anthropic, and Google DeepMind.

Earlier this year, the company consolidated its AI teams into Meta Superintelligence Labs and reorganised them into four units focused on foundation models, research, product integration, and infrastructure.

Despite the strategic shift, many question whether Vibes adds value or deepens user fatigue with generative content.

YouTube rolls back rules on Covid-19 and 2020 election misinformation

Google’s YouTube has announced it will reinstate accounts previously banned for repeatedly posting misinformation about Covid-19 and the 2020 US presidential election. The decision marks another rollback of moderation rules that once targeted health and political falsehoods.

The platform said the move reflects a broader commitment to free expression and follows similar changes at Meta and Elon Musk’s X.

YouTube had already scrapped policies barring repeat claims about Covid-19 and election outcomes, rules that had led to actions against figures such as Robert F. Kennedy Jr.’s Children’s Health Defense and Senator Ron Johnson.

The announcement came in a letter to House Judiciary Committee Chair Jim Jordan, amid a Republican-led investigation into whether the Biden administration pressured tech firms to remove certain content.

YouTube claimed the White House created a political climate aimed at shaping its moderation, though it insisted its policies were enforced independently.

The company said that US conservative creators have a significant role in civic discourse and will be allowed to return under the revised rules. The move highlights Silicon Valley’s broader trend of loosening restrictions on speech, especially under pressure from right-leaning critics.

LinkedIn default AI data sharing faces Dutch privacy watchdog scrutiny

The Dutch privacy watchdog, Autoriteit Persoonsgegevens (AP), is warning LinkedIn users in the Netherlands to review their settings to prevent their data from being used for AI training.

LinkedIn plans to use names, job titles, education history, locations, skills, photos, and public posts from European users to train its systems. Private messages will not be included; however, the sharing option is enabled by default.

AP Deputy Chair Monique Verdier said the move poses significant risks. She warned that once personal data is used to train a model, it cannot be removed, and its future uses are unpredictable.

LinkedIn, headquartered in Dublin, falls under the jurisdiction of the Data Protection Commission in Ireland, which will determine whether the plan can proceed. The AP said it is working with Irish and EU counterparts and has already received complaints.

Users must opt out by 3 November if they do not wish to have their data used. They can disable the setting via the AP’s link or manually in LinkedIn under ‘settings & privacy’ → ‘data privacy’ → ‘data for improving generative AI’.

Apple escalates fight against EU digital law

US tech giant Apple has called for the repeal of the EU’s Digital Markets Act, claiming the rules undermine user privacy, disrupt services, and erode product quality.

The company urged the European Commission to replace the legislation with a ‘fit for purpose’ framework, or hand enforcement to an independent agency insulated from political influence.

Apple argued that the Act’s interoperability requirements had delayed the rollout of features in the EU, including Live Translation on AirPods and iPhone mirroring. Additionally, the firm accused the Commission of adopting extreme interpretations that created user vulnerabilities instead of protecting them.

Brussels has dismissed those claims. A Commission spokesperson stressed that DMA compliance is an obligation, not an option, and said the rules guarantee fair competition by forcing dominant platforms to open access to rivals.

The dispute intensifies long-running friction between US tech firms and EU regulators.

Apple has already appealed to the courts, with a public hearing scheduled in October, while Washington has criticised the bloc’s wider digital policy.

The clash has deepened transatlantic trade tensions, with the White House recently threatening tariffs after fresh fines against another American tech company.
