Tilly Norwood creator accelerates AI-first entertainment push

The AI talent studio behind synthetic actress Tilly Norwood is preparing to expand what it calls the ‘Tilly-verse’, moving into a new phase of AI-first entertainment built around multiple digital characters.

Xicoia, founded by Particle6 and Tilly creator Eline van der Velden, is recruiting for nine roles spanning writing, production, growth, and AI development, including a junior comedy writer, a social media manager, and a senior ‘AI wizard-in-chief’.

The UK-based studio says the hires will support Tilly’s planned 2026 expansion into on-screen appearances and direct fan interaction, alongside the introduction of new AI characters designed to coexist within the same fictional universe.

Van der Velden argues the project creates jobs rather than replacing them, positioning the studio as a response to anxieties around AI in entertainment and rejecting claims that Tilly is meant to displace human performers.

Industry concerns persist, however, with actors’ representatives disputing whether synthetic creations can be considered performers at all and warning that protecting human artists’ names, images, and likenesses remains critical as AI adoption accelerates.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Noyb study points to demand for tracking-free option

A new study commissioned by noyb reports that most users favour a tracking-free advertising option when navigating Pay or Okay systems. Researchers found low genuine support for data collection when participants were asked without pressure.

Consent rates rose sharply when users were offered only a choice between paying or agreeing to tracking, with most selecting consent. The findings indicate that the absence of a realistic alternative shapes outcomes more than genuine preference.

The introduction of a third option, advertising without tracking, prompted a strong shift, with most participants choosing that route. The evidence suggests users accept ad-funded models provided their behavioural data remains untouched.

Researchers observed similar patterns on social networks, news sites and other platforms, undermining claims that certain sectors require special treatment. Debate continues as regulators assess whether Pay or Okay complies with EU data protection rules such as the GDPR.

UK lawmakers push for binding rules on advanced AI

Growing political pressure is building in Westminster as more than 100 parliamentarians call for binding regulation on the most advanced AI systems, arguing that current safeguards lag far behind industry progress.

A cross-party group, supported by former defence and AI ministers, warns that unregulated superintelligent models could threaten national and global security.

The campaign, coordinated by Control AI and backed by tech figures including Skype co-founder Jaan Tallinn, urges Prime Minister Keir Starmer to distance the UK from the US stance against strict federal AI rules.

Experts such as Yoshua Bengio and senior peers argue that governments remain far behind AI developers, leaving companies to set the pace with minimal oversight.

Calls for action come after warnings from frontier AI scientists that the world must decide by 2030 whether to allow highly advanced systems to self-train.

Campaigners want the UK to champion global agreements limiting superintelligence development, establish mandatory testing standards and introduce an independent watchdog to scrutinise AI use in the public sector.

Government officials maintain that AI is already regulated through existing frameworks, though critics say the approach lacks urgency.

Pressure is growing for new, binding rules on the most powerful models, with advocates arguing that rapid advances mean strong safeguards may be needed within the next two years.

EU gains stronger ad oversight after TikTok agreement

Regulators in the EU have accepted binding commitments from TikTok aimed at improving advertising transparency under the Digital Services Act.

The agreement follows months of scrutiny and addresses concerns raised in the Commission’s preliminary findings earlier in the year.

TikTok will now provide complete versions of advertisements exactly as they appear in user feeds, along with associated URLs, targeting criteria and aggregated demographic data.

Researchers will gain clearer insight into how advertisers reach users, rather than relying on partial or delayed information. The platform has also agreed to refresh its advertising repository within 24 hours.

Further improvements include new search functions and filters that make it easier for the public, civil society and regulators to examine advertising content.

These changes are intended to support efforts to detect scams, identify harmful products and analyse coordinated influence operations, especially around elections.

TikTok must implement its commitments to the EU within deadlines ranging from two to twelve months, depending on the measure.

The Commission will closely monitor compliance while continuing broader investigations into algorithmic design, protection of minors, data access and risks connected to elections and civic discourse.

Australia introduces new codes to protect children online

Australian regulators have released new guidance ahead of the introduction of industry codes designed to protect children from exposure to harmful online material.

The Age Restricted Material Codes will apply to a wide range of online services, including app stores, social platforms, equipment providers, pornography sites and generative AI services, with the first tranche beginning on 27 December.

The rules require search engines to blur image results involving pornography or extreme violence to reduce accidental exposure among young users.

Search services must also redirect people seeking information related to suicide, self-harm or eating disorders to professional mental health support instead of allowing harmful spirals to unfold.

eSafety argues that many children unintentionally encounter disturbing material at very young ages, often through search results that act as gateways rather than deliberate choices.

The guidance emphasises that adults will still be able to access unblurred material by clicking through, and there is no requirement for Australians to log in or identify themselves before searching.

eSafety maintains that the priority lies in shielding children from images and videos they cannot cognitively process or forget once they have seen them.

These codes will operate alongside existing standards that tackle unlawful content and will complement new minimum age requirements for social media, which are set to begin in mid-December.

Authorities in Australia consider the reforms essential for reducing preventable harm and guiding vulnerable users towards appropriate support services.

EU targets X for breaking the Digital Services Act

European regulators have imposed a €120 million fine on X after ruling that the platform breached transparency rules under the Digital Services Act.

The Commission concluded that the company misled users with its blue checkmark system, restricted research access and operated an inadequate advertising repository.

Officials found that paid verification on X encouraged users to believe their accounts had been authenticated when, in fact, no meaningful checks were conducted.

EU regulators argued that such practices increased exposure to scams and impersonation fraud, rather than supporting trust in online communication.

The Commission also stated that the platform’s advertising repository lacked essential information and created barriers that prevented researchers and civil society from examining potential threats.

European authorities judged that X failed to offer legitimate access to public data for eligible researchers. Terms of service blocked independent data collection, including scraping, while the company’s internal processes created further obstacles.

Regulators believe such restrictions frustrate efforts to study misinformation, influence campaigns and other systemic risks within the EU.

X must now outline the steps it will take to end the blue checkmark infringement within 60 working days and deliver a wider action plan on data access and advertising transparency within 90 days.

Failure to comply could lead to further penalties as the Commission continues its broader investigation into information manipulation and illegal content across the platform.

Meta moves investment from metaverse to AI smart glasses

Meta is redirecting part of its metaverse spending towards AI-powered glasses and wearables, aiming to capitalise on the growing interest in these devices. The shift comes after years of substantial investment in virtual reality, which has yet to fully convince investors of its long-term potential.

Reports indicate that Meta plans to reduce its metaverse budget by up to 30 percent, a move that lifted its share price by more than 3.4 percent. The company stated it has no broader changes planned, while offering no clarification on whether the adjustment will lead to job cuts.

The latest AI glasses, launched in September, received strong early feedback for features such as an in-lens display that can describe scenes and translate text. Their debut has intensified competition, with several industry players, including firms in China, racing to develop smart glasses and wearable technology.

Meta continues to face scepticism surrounding the metaverse, despite investing heavily in VR headsets and its Horizon Worlds platform. Interest in AI has surged, prompting the company to place a greater focus on large AI models, including those integrated into WhatsApp, and on producing more advanced smart devices.

Waterstones open to selling AI-generated books, but only with clear labelling

Waterstones CEO James Daunt has stated that the company is willing to stock books created using AI, provided the works are transparently labelled and there is genuine customer demand.

In an interview on the BBC’s Big Boss podcast, Daunt stressed that Waterstones currently avoids placing AI-generated books on shelves and that his instinct as a bookseller is to ‘recoil’ from such titles. However, he emphasised that the decision ultimately rests with readers.

Daunt described the wider surge in AI-generated content as largely unsuitable for bookshops, saying most such works are not of a type Waterstones would typically sell. The publishing industry continues to debate the implications of generative AI, particularly around threats to authors’ livelihoods and the use of copyrighted works to train large language models.

A recent University of Cambridge survey found that more than half of published authors fear being replaced by AI, and two-thirds believe their writing has been used without permission to train models.

Despite these concerns, some writers are adopting AI tools for research or editing, while AI-generated novels and full-length works are beginning to emerge.

Daunt noted that Waterstones would consider carrying such titles if readers show interest, while making clear that the chain would always label AI-authored works to avoid misleading consumers. He added that readers tend to value the human connection with authors, suggesting that AI books are unlikely to be prominently featured in stores.

Daunt has led Waterstones since 2011, reshaping the chain by decentralising decision-making and removing the longstanding practice of publishers paying for prominent in-store placement. He also currently heads Barnes & Noble in the United States.

With both chains now profitable, Daunt acknowledged that a future share flotation is increasingly likely. However, no decision has been taken on whether London or New York would host any potential IPO.

UAE launches scholarship to develop future AI leaders

The UAE unveiled a scholarship programme to nurture future leaders in AI at MBZUAI. The initiative, guided by Sheikh Tahnoon bin Zayed, targets outstanding undergraduates beginning in the 2025 academic year.

Approximately 350 students will be supported over six years following a rigorous selection process. Applicants will be assessed for mathematical strength, leadership potential and entrepreneurial drive in line with national technological ambitions.

Scholars will gain financial backing alongside opportunities to represent the UAE internationally and develop innovative ventures. Senior officials said the programme strengthens the nation’s aim to build a world-class cohort of AI specialists.

MBZUAI highlighted its interdisciplinary approach that blends technical study with ethics, leadership and business education. Students will have access to advanced facilities, industry placements, and mentorships designed to prepare them for global technology roles.

Pope urges guidance for youth in an AI-shaped world

Pope Leo XIV urged global institutions to guide younger generations as they navigate the expanding influence of AI. He warned that rapid access to information cannot replace the deeper search for meaning and purpose.

Previously, the Pope had warned students not to rely solely on AI for educational support. He encouraged educators and leaders to help young people develop discernment and confidence when encountering digital systems.

Additionally, he called for coordinated action across politics, business, academia and faith communities to steer technological progress toward the common good. He argued that AI development should not be treated as an inevitable pathway shaped by narrow interests.

He noted that AI reshapes human relationships and cognition, raising concerns about its effects on freedom, creativity and contemplation. He insisted that safeguarding human dignity is essential to managing AI’s wide-ranging consequences.
