Greece considers social media ban for under-16s, says Mitsotakis

Greek Prime Minister Kyriakos Mitsotakis has signalled that Greece may consider banning social media use for children under 16.

He raised the issue during a UN event in New York, hosted by Australia, titled ‘Protecting Children in the Digital Age’, held as part of the 80th UN General Assembly.

Mitsotakis emphasised that any restrictions would be coordinated with international partners, warning that the world is carrying out the largest uncontrolled experiment on children’s minds through unchecked social media exposure.

He cautioned that the long-term effects are uncertain but unlikely to be positive.

The prime minister pointed to new national initiatives, such as the ban on mobile phone use in schools, which he said has transformed the educational experience.

He also highlighted the recent launch of parco.gov.gr, which provides age verification and parental control tools to support families in protecting children online.

Mitsotakis stressed that difficulties enforcing such measures cannot serve as an excuse for inaction, urging global cooperation to address the growing risks children face in the digital age.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

California enacts first state-level AI safety law

In the US, California Governor Gavin Newsom has signed SB 53, a landmark law establishing transparency and safety requirements for large AI companies.

The legislation obliges major AI developers such as OpenAI, Anthropic, Meta, and Google DeepMind to disclose their safety protocols. It also introduces whistle-blower protections and a reporting mechanism for safety incidents, including cyberattacks and autonomous AI behaviour not covered by the EU AI Act.

Reactions across the industry have been mixed. Anthropic supported the law, while Meta and OpenAI lobbied against it, with OpenAI publishing an open letter urging Newsom not to sign. Tech firms have warned that state-level measures could create a patchwork of regulation that stifles innovation.

Despite resistance, the law positions California as a national leader in AI governance. Newsom said the state had demonstrated that it was possible to safeguard communities without stifling growth, calling AI ‘the new frontier in innovation’.

Similar legislation is under consideration in New York, while California lawmakers are also debating SB 243, a separate bill that would regulate AI companion chatbots.

YouTube settles Donald Trump lawsuit over account suspension for $24.5 million

YouTube has agreed to a $24.5 million settlement to resolve a lawsuit filed by President Donald Trump, stemming from the platform’s decision to suspend his account after the 6 January 2021 Capitol riot.

The lawsuit was part of a broader legal push by Trump against major tech companies over what he calls politically motivated censorship.

As part of the deal, YouTube will donate $22 million to the Trust for the National Mall on Trump’s behalf, funding a new $200 million White House ballroom project. Another $2.5 million will go to co-plaintiffs, including the American Conservative Union and author Naomi Wolf.

The settlement includes no admission of wrongdoing by YouTube and was intended to avoid further legal costs. The move follows similar multimillion-dollar settlements by Meta and X, which also suspended Trump’s accounts post-January 6.

Critics argue the settlement signals a retreat from consistent content moderation. Media scholar Timothy Koskie warned it sets a troubling precedent for global digital governance and selective enforcement.

New Facebook tools help creators boost fan engagement

Facebook has introduced new tools designed to help creators increase engagement and build stronger communities on the platform. The update includes fan challenges, custom badges for top contributors, and new insights to track audience loyalty.

Fan challenges allow creators with over 100,000 followers to issue prompts inviting fans to share content on a theme or event. Contributions are displayed in a dedicated feed, with a leaderboard ranking entries by reactions.

Challenges can run for a week or stretch over several months, giving creators flexibility in engaging their audiences.

Meta has also launched custom fan badges for creators with more than one million followers, enabling them to rename Top Fan badges each month. The feature gives elite-level fans extra recognition and strengthens the sense of community. Fans can choose whether to accept the custom badge.

To complement these features, Facebook has added new metrics showing the number of Top Fans on a page. These insights help creators measure engagement efforts and reward their most dedicated followers.

The tools are now available to eligible creators worldwide.

ChatGPT gets family safety update with parental controls

OpenAI has introduced new parental controls for ChatGPT, giving families greater oversight of how teens use the AI platform. The tools, which are live for all users, allow parents to link accounts with their children and manage settings through a simple control dashboard.

The system introduces stronger safeguards for teen accounts, including filters on graphic or harmful content and restrictions on roleplay involving sex, violence or extreme beauty ideals.

Parents can also fine-tune features such as voice mode, memory, image generation, or set quiet hours when ChatGPT cannot be accessed.

A notification mechanism has been added to alert parents if a teen shows signs of acute distress, escalating to emergency services in critical cases. OpenAI said the controls were shaped by consultation with experts, advocacy groups, and policymakers and will be expanded as research evolves.

To complement the parental controls, a new online resource hub has been launched to help families learn how ChatGPT works and explore positive uses in study, creativity and daily life.

OpenAI also plans to roll out an age-prediction system that automatically applies teen-appropriate settings.

EDPB issues guidelines on GDPR-DSA tension for platforms

On 12 September 2025, the European Data Protection Board (EDPB) adopted draft guidelines detailing how online platforms should reconcile requirements under the GDPR and the Digital Services Act (DSA). The draft is now open for public consultation through 31 October.

The guidelines address key areas of tension, including proactive investigations, notice-and-action systems, deceptive design, recommender systems, age safety and transparency in advertising. They emphasise that DSA obligations must be implemented in ways consistent with GDPR principles.

For instance, the guidelines suggest that proactive investigations of illegal content should generally rely on ‘legitimate interests’ as a legal basis, include safeguards for accuracy, and avoid automated decisions with legal effects.

Platforms are also told to provide users with non-profiling recommendation systems, and the guidelines encourage data protection impact assessments (DPIAs) where high risks are identified.

The guidance also clarifies that the DSA does not override the GDPR. Platforms subject to both must ensure lawful, fair and transparent processing while integrating risk analysis and privacy by design. The draft guidelines include practical examples and cross-references to existing EDPB documents.

Internal chatbot Veritas helps Apple refine Siri features ahead of launch

Apple is internally testing its upcoming Siri upgrade with a chatbot-style tool called Veritas, according to a report by Bloomberg. The app enables employees to experiment with new capabilities and provide structured feedback before a public launch.

Veritas lets testers type questions, engage in conversations, and revisit past chats, making it similar to ChatGPT and Gemini. Apple is reportedly using the feedback to refine Siri’s features, including data search and in-app actions.

The tool remains internal and is not planned for public release. Its purpose is to make Siri’s upgrade process more efficient and guide Apple’s decision on future chatbot-like experiences.

Apple executives have said they prefer integrating AI into daily tasks instead of offering a separate chatbot. Craig Federighi confirmed at WWDC that Apple is focused on natural task assistance rather than a standalone product.

Bloomberg reports that the new Siri will use Apple’s own AI models alongside external systems like Google’s Gemini, with a launch expected next spring.

Google tests AI hosts for YouTube Music

Google is testing AI-generated hosts for YouTube Music through its new YouTube Labs programme. The AI hosts will appear while users listen to mixes and radio stations, providing commentary, fan trivia, and stories to enrich the listening experience.

The feature is designed to resemble a radio DJ but relies on AI, so there is a risk of occasional inaccuracies.

YouTube Labs, similar to Google Labs, allows the company to trial new AI features and gather user feedback before wider release. The AI hosts are currently available to a limited group of US testers, who can sign up via YouTube Labs and snooze commentary for an hour or all day.

The rollout follows Google’s Audio Overviews in NotebookLM, which turns research papers and documents into podcast-style summaries. Past AI experiments on YouTube, such as automatic dubbing, faced criticism as viewers had limited control over translations.

The AI hosts experiment shows Google’s push to integrate AI across its apps, enhancing engagement while monitoring feedback before wider rollout.

Bye Bye Google AI hides unwanted AI results in Search

Google is pushing AI deeper into its services, with AI Overviews already reaching billions of users and AI Mode now added to Search. Chrome is also being rebranded as an AI-first browser.

Not all users welcome these changes. Concerns remain about accuracy, intrusive design and Google’s growing control over how information is displayed. Unlike other features, AI elements in Search cannot be turned off directly, leaving users reliant on third-party solutions.

One such solution is the new ‘Bye Bye, Google AI’ extension, which hides AI-generated results and unwanted blocks such as sponsored links, shopping sections and discussion forums.

The extension works across Chromium-based browsers, though it relies on CSS and may break when Google updates its interface.
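The extension’s actual selectors are not published here, but the CSS-hiding approach it describes can be sketched in a few lines. The selectors below are illustrative guesses, not the extension’s real ones, which is exactly why such tools break when Google changes its markup:

```typescript
// Sketch of the CSS-hiding approach used by blockers like 'Bye Bye, Google AI'.
// The selectors are hypothetical placeholders; a real extension targets the
// live class names and attributes in Google's results page.
const AI_BLOCK_SELECTORS: string[] = [
  "div[data-ai-overview]", // hypothetical AI Overviews container
  "#ai-mode-panel",        // hypothetical AI Mode panel
];

// Combine the selectors into one rule that removes all matches from layout.
function buildHidingCss(selectors: string[]): string {
  return `${selectors.join(", ")} { display: none !important; }`;
}

// In a real content script, the rule would be injected into the page:
//   const style = document.createElement("style");
//   style.textContent = buildHidingCss(AI_BLOCK_SELECTORS);
//   document.head.appendChild(style);
```

Because the rule is keyed to page structure rather than a stable API, any rename of Google’s containers silently disables the filter until the selector list is updated.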

The debate reflects wider unease about AI in Search.

While Google claims these features improve the user experience, critics argue they risk spreading false information and keeping traffic within Google’s ecosystem rather than directing users to original publishers.

Spotify launches new policies on AI and music spam

Spotify announced new measures to address AI risks in music, aiming to protect artists’ identities and preserve trust on the platform. The company said AI can boost creativity but also enable harmful content like impersonations and spam that exploit artists and cut into royalties.

A new impersonation policy has been introduced, clarifying that AI-generated vocal clones of artists are only permitted with explicit authorisation. Spotify is strengthening processes to block fraudulent uploads and mismatches, giving artists quicker recourse when their work is misused.

The platform will launch a new spam filter this year to detect and curb manipulative practices like mass uploads and artificially short tracks. The system will be deployed cautiously, with updates added as new abuse tactics emerge, in order to safeguard legitimate creators.

In addition, Spotify will back an industry standard for AI disclosures in music credits, allowing artists and rights holders to show how AI was used in production. The company said these steps show its commitment to protecting artists, ensuring transparency and safeguarding fair royalties as AI reshapes the music industry.
