Teen chatbot use surges across the US

Nearly a third of US teenagers engage with AI chatbots each day, according to new Pew data. Roughly 70% have tried a chatbot at least once, researchers say, reflecting growing reliance on digital tools during schoolwork and leisure time. Concerns remain over exposure to mature content and possible mental health harms.

Pew surveyed almost 1,500 US teens aged 13 to 17, finding broadly similar usage patterns across gender and income. Older teens reported higher engagement, while Black and Hispanic teens showed slightly greater adoption than White peers.

Experts warn that frequent chatbot use may hinder development or encourage cheating in academic settings. Safety groups have urged parents to limit access to companion-like AI tools, citing risks posed by romantic or intimate interactions with minors.

Companies are now rolling out safeguards in response to public scrutiny and legal pressure. OpenAI and Character.AI have tightened controls, while Meta says it has adjusted policies following reports of inappropriate exchanges.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Teens worldwide divided over Australia’s under-16 social media ban

As Australia prepares to enforce the world’s first nationwide under-16 social-media ban on 10 December 2025, young people across the globe are voicing sharply different views about the move.

Some teens view it as an opportunity for a digital ‘detox’, a chance to step back from constant social media pressure. Others argue the law is extreme, unfair, and likely to push young people toward less regulated corners of the internet.

In Mumbai, 19-year-old Pratigya Jena said the debate isn’t simple: ‘nothing is either black or white.’ She acknowledged that social media can help young entrepreneurs, but also warned that unrestricted access exposes children to inappropriate content.

Meanwhile, in Berlin, 13-year-old Luna Drewes expressed cautious optimism; she felt the ban might help reduce the pressure to conform to beauty standards that are often amplified online. Another teen, 15-year-old Enno Caro Brandes, said he understood the motivation but admitted he couldn’t imagine giving up social media altogether.

In Doha, older teens voiced more vigorous opposition. Sixteen-year-old Firdha Razak called the ban ‘really stupid,’ while 16-year-old Youssef Walid argued that it would be trivial to bypass using VPNs. Both said they feared losing vital social and communication outlets.

Some, like 15-year-old Mitchelle Okinedo from Lagos, suggested the ban ignored how deeply embedded social media is in modern life: ‘We were born with it,’ she said, hinting that simply cutting access may be unrealistic. Others noted the role of social media in self-expression, especially in areas where offline spaces are limited.

Even within Australia, opinions diverge. A 15-year-old named Layton Lewis said he doubted the ban would have significant effects. His mother, Emily, meanwhile, welcomed the change, hoping it might encourage more authentic offline friendships rather than ‘illusory’ online interactions.

The variety of reactions underscores the stark test facing the law: while some see potential mental health or safety gains, many worry about teens’ rights, the effectiveness of enforcement, and whether simply banning access truly addresses the underlying risks.

As commentary and activism ramp up around digital-age regulation, few expect consensus, but many do expect the debate to shape future policy beyond Australia.

Less personalised ads coming to Meta platforms in the EU

Meta has agreed to introduce a less personalised ads option for Facebook and Instagram users in the EU, as part of efforts to comply with the bloc’s Digital Markets Act and address concerns over data use and user consent.

Under the revised model, users will be able to access Meta’s social media platforms without agreeing to extensive personal data processing for fully personalised ads. Instead, they can opt for an alternative experience based on significantly reduced data inputs, resulting in more limited ad targeting.

The option is set to roll out across the EU from January 2026. It marks the first time Meta has offered users a clear choice between highly personalised advertising and a reduced-data model across its core platforms.

The change follows months of engagement between Meta and Brussels after the European Commission ruled in April that the company had breached the DMA. Regulators stated that Meta’s previous approach had failed to provide users with a genuine and effective choice over how their data was used for advertising.

Once the option is implemented, the Commission said, it will gather evidence and feedback from Meta, advertisers, publishers, and other stakeholders. The goal is to assess how widely the new option is adopted and whether it significantly reshapes competition and data practices in the EU digital advertising market.

Survey reveals split views on AI in academic peer review

Growing use of generative AI within peer review is creating a sharp divide among physicists, according to a new survey by the Institute of Physics Publishing.

Researchers appear better informed and more willing to express firm views than before, with a notable rise in those who see a positive effect alongside a large group voicing strong reservations. Many believe AI tools accelerate early reading and help reviewers concentrate on novelty instead of routine work.

Others fear that reviewers might replace careful evaluation with automated text generation, undermining the value of expert judgement.

A sizeable proportion of researchers would be unhappy if AI shaped assessments of their own papers, even though many quietly rely on such tools when reviewing for journals. Publishers are now revisiting their policies, while aiming to respect authors who expect human-led scrutiny.

Editors also report that AI-generated reports often lack depth and fail to reflect domain expertise. Concerns extend to confidentiality, with organisations such as the American Physical Society warning that uploading manuscripts to chatbots can breach author trust.

Legal disputes about training data add further uncertainty, pushing publishers to approach policy changes with caution.

Despite disagreements, many researchers accept that AI will remain part of peer review as workloads increase and scientific output grows. The debate now centres on how to integrate new tools in a way that supports researchers instead of weakening the foundations of scholarly communication.

Real-time journalism becomes central to Meta AI strategy

Meta has signed commercial agreements with news publishers to feed real-time reporting into Meta AI, enabling its chatbot to answer news-related queries with up-to-date information from multiple editorial sources.

The company said responses will include links to full articles, directing users to publishers’ websites and helping partners reach new audiences beyond traditional platform distribution.

Initial partners span US and international outlets, covering global affairs, politics, entertainment, and sports, with Meta signalling that additional publishing deals are in the works.

The shift marks a recalibration. Meta previously reduced its emphasis on news across Facebook and ended most publisher payments, but now sees licensed reporting as essential to improving AI accuracy and relevance.

Facing intensifying competition in the AI market, Meta is positioning real-time journalism as a differentiator for its chatbot, which is available across its apps and to users worldwide.

Creatives warn that AI is reshaping their jobs

AI is accelerating across creative fields, raising concerns among workers who say the technology is reshaping livelihoods faster than anyone expected.

A University of Cambridge study recently found that more than two-thirds of creative professionals fear AI has undermined their job security, and many now describe the shift as unavoidable.

One of them is Norwich-based artist Aisha Belarbi, who says the rise of image-generation tools has made commissions harder to secure as clients ‘can just generate whatever they want’. Although she works in both traditional and digital media, Belarbi says she increasingly struggles to distinguish original art from AI output. That uncertainty, she argues, threatens the value of lived experience and the labour behind creative work.

Others are embracing the change. Videographer JP Allard transformed his Milton Keynes production agency after discovering the speed and scale of AI-generated video. His company now produces multilingual ‘digital twins’ and fully AI-generated commercials, work he says is quicker and cheaper than traditional filming. Yet he acknowledges that the pace of change can leave staff behind and says retraining has not kept up with the technology.

For musician Ross Stewart, the concern centres on authenticity. After listening to what he later discovered was an AI-generated blues album, he questioned the impact of near-instant song creation on musicians’ livelihoods and exposure. He believes audiences will continue to seek human performance, but worries that the market for licensed music is already shifting towards AI alternatives.

Copywriter Niki Tibble has experienced similar pressures. Returning from maternity leave, she found that AI tools had taken over many entry-level writing tasks. While some clients still prefer human writers for strategy, nuance and brand voice, Tibble’s work has increasingly shifted toward reviewing and correcting AI-generated copy. She says the uncertainty leaves her unsure whether her role will exist in a decade.

Across these stories, creative workers describe a sector in rapid transition. While some see new opportunities, many fear the speed of adoption and a future where AI replaces the very work that has long defined their craft.

Tilly Norwood creator accelerates AI-first entertainment push

The AI talent studio behind synthetic actress Tilly Norwood is preparing to expand what it calls the ‘Tilly-verse’, moving into a new phase of AI-first entertainment built around multiple digital characters.

Xicoia, founded by Particle6 and Tilly creator Eline van der Velden, is recruiting for nine roles spanning writing, production, growth, and AI development, including a junior comedy writer, a social media manager, and a senior ‘AI wizard-in-chief’.

The UK-based studio says the hires will support Tilly’s planned 2026 expansion into on-screen appearances and direct fan interaction, alongside the introduction of new AI characters designed to coexist within the same fictional universe.

Van der Velden argues the project creates jobs rather than replacing them, positioning the studio as a response to anxieties around AI in entertainment and rejecting claims that Tilly is meant to displace human performers.

Industry concerns persist, however, with actors’ representatives disputing whether synthetic creations can be considered performers at all and warning that protecting human artists’ names, images, and likenesses remains critical as AI adoption accelerates.

EU gains stronger ad oversight after TikTok agreement

Regulators in the EU have accepted binding commitments from TikTok aimed at improving advertising transparency under the Digital Services Act.

The agreement follows months of scrutiny and addresses concerns raised in the Commission’s preliminary findings earlier in the year.

TikTok will now provide complete versions of advertisements exactly as they appear in user feeds, along with associated URLs, targeting criteria and aggregated demographic data.

Researchers will gain clearer insight into how advertisers reach users, rather than relying on partial or delayed information. The platform has also agreed to refresh its advertising repository within 24 hours.

Further improvements include new search functions and filters that make it easier for the public, civil society and regulators to examine advertising content.

These changes are intended to support efforts to detect scams, identify harmful products and analyse coordinated influence operations, especially around elections.

TikTok must implement its commitments within deadlines ranging from two to twelve months, depending on the measure.

The Commission will closely monitor compliance while continuing broader investigations into algorithmic design, protection of minors, data access and risks connected to elections and civic discourse.

Australia introduces new codes to protect children online

Australian regulators have released new guidance ahead of the introduction of industry codes designed to protect children from exposure to harmful online material.

The Age Restricted Material Codes will apply to a wide range of online services, including app stores, social platforms, equipment providers, pornography sites and generative AI services, with the first tranche beginning on 27 December.

The rules require search engines to blur image results involving pornography or extreme violence to reduce accidental exposure among young users.

Search services must also redirect people seeking information related to suicide, self-harm or eating disorders to professional mental health support instead of allowing harmful spirals to unfold.

eSafety argues that many children unintentionally encounter disturbing material at very young ages, often through search results that act as gateways rather than deliberate choices.

The guidance emphasises that adults will still be able to access unblurred material by clicking through, and there is no requirement for Australians to log in or identify themselves before searching.

eSafety maintains that the priority lies in shielding children from images and videos they cannot cognitively process or forget once they have seen them.

These codes will operate alongside existing standards that tackle unlawful content and will complement new minimum age requirements for social media, which are set to begin in mid-December.

Authorities in Australia consider the reforms essential for reducing preventable harm and guiding vulnerable users towards appropriate support services.

EU targets X for breaking the Digital Services Act

European regulators have imposed a €120 million fine on X after ruling that the platform breached transparency rules under the Digital Services Act.

The Commission concluded that the company misled users with its blue checkmark system, restricted research access and operated an inadequate advertising repository.

Officials found that paid verification on X encouraged users to believe their accounts had been authenticated when, in fact, no meaningful checks were conducted.

EU regulators argued that such practices increased exposure to scams and impersonation fraud, rather than supporting trust in online communication.

The Commission also stated that the platform’s advertising repository lacked essential information and created barriers that prevented researchers and civil society from examining potential threats.

European authorities judged that X failed to offer legitimate access to public data for eligible researchers. Terms of service blocked independent data collection, including scraping, while the company’s internal processes created further obstacles.

Regulators believe such restrictions frustrate efforts to study misinformation, influence campaigns and other systemic risks within the EU.

X must now outline the steps it will take to end the blue checkmark infringement within 60 working days and deliver a wider action plan on data access and advertising transparency within 90 days.

Failure to comply could lead to further penalties as the Commission continues its broader investigation into information manipulation and illegal content across the platform.
