The AI talent studio behind synthetic actress Tilly Norwood is preparing to expand what it calls the ‘Tilly-verse’, moving into a new phase of AI-first entertainment built around multiple digital characters.
Xicoia, founded by Particle6 and Tilly creator Eline van der Velden, is recruiting for 9 roles spanning writing, production, growth, and AI development, including a junior comedy writer, a social media manager, and a senior ‘AI wizard-in-chief’.
The UK-based studio says the hires will support Tilly’s planned 2026 expansion into on-screen appearances and direct fan interaction, alongside the introduction of new AI characters designed to coexist within the same fictional universe.
Van der Velden argues the project creates jobs rather than replacing them, positioning the studio as a response to anxieties around AI in entertainment and rejecting claims that Tilly is meant to displace human performers.
Industry concerns persist, however, with actors’ representatives disputing whether synthetic creations can be considered performers at all and warning that protecting human artists’ names, images, and likenesses remains critical as AI adoption accelerates.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Nigeria’s National Information Technology Development Agency (NITDA) has issued an urgent advisory on security weaknesses in OpenAI’s ChatGPT models. The agency warned that flaws affecting GPT-4o and GPT-5 could expose users to data leakage through indirect prompt injection.
According to NITDA’s Computer Emergency Readiness and Response Team, seven critical flaws were identified that allow hidden instructions to be embedded in web content. Malicious prompts can be triggered during routine browsing, search or summarisation without user interaction.
The advisory warned that attackers can bypass safety filters, exploit rendering bugs and manipulate conversation context. Some techniques allow injected instructions to persist across future interactions by interfering with the models’ memory functions.
While OpenAI has addressed parts of the issue, NITDA said large language models still struggle to reliably distinguish malicious data from legitimate input. Risks include unintended actions, information leakage and long-term behavioural influence.
NITDA urged users and organisations in Nigeria to apply updates promptly and limit browsing or memory features when not required. The agency said that exposing AI systems to external tools increases their attack surface and demands stronger safeguards.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
South Korea has gained nationwide satellite coverage as Starlink enters the market and expands the country’s already advanced connectivity landscape.
The service offers high-speed access through a dense LEO network and arrives with subscription options for households, mobile users and businesses.
Analysts see meaningful benefits for regions that are difficult to serve through fixed networks, particularly in mountainous areas and offshore locations.
Enterprise interest has grown quickly. Maritime operators moved first, with SK Telink and KT SAT securing contracts as Starlink went live. Large fleets will now adopt satellite links for navigation support, remote management and stronger emergency communication.
The technology has also reached the aviation sector as carriers under Hanjin Group plan to install Starlink across all aircraft, aiming to introduce stable in-flight Wi-Fi from 2026.
Although South Korea’s fibre and 5G networks offer far higher peak speeds, Starlink provides reliability where terrestrial networks cannot operate. Industry observers expect limited uptake from mainstream households but anticipate significant momentum in maritime transport, aviation, construction and energy.
An expansion in South Korea that marks one of Starlink’s most strategic Asia-Pacific moves, driven by industrial demand and early partnerships.
Would you like to learn more aboutAI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Regulators in the EU have accepted binding commitments from TikTok aimed at improving advertising transparency under the Digital Services Act.
An agreement that follows months of scrutiny and addresses concerns raised in the Commission’s preliminary findings earlier in the year.
TikTok will now provide complete versions of advertisements exactly as they appear in user feeds, along with associated URLs, targeting criteria and aggregated demographic data.
Researchers will gain clearer insight into how advertisers reach users, rather than relying on partial or delayed information. The platform has also agreed to refresh its advertising repository within 24 hours.
Further improvements include new search functions and filters that make it easier for the public, civil society and regulators to examine advertising content.
These changes are intended to support efforts to detect scams, identify harmful products and analyse coordinated influence operations, especially around elections.
TikTok must implement its commitments to the EU within deadlines ranging from two to twelve months, depending on each measure.
The Commission will closely monitor compliance while continuing broader investigations into algorithmic design, protection of minors, data access and risks connected to elections and civic discourse.
Would you like to learn more aboutAI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Australian regulators have released new guidance ahead of the introduction of industry codes designed to protect children from exposure to harmful online material.
The Age Restricted Material Codes will apply to a wide range of online services, including app stores, social platforms, equipment providers, pornography sites and generative AI services, with the first tranche beginning on 27 December.
The rules require search engines to blur image results involving pornography or extreme violence to reduce accidental exposure among young users.
Search services must also redirect people seeking information related to suicide, self-harm or eating disorders to professional mental health support instead of allowing harmful spirals to unfold.
eSafety argues that many children unintentionally encounter disturbing material at very young ages, often through search results that act as gateways rather than deliberate choices.
The guidance emphasises that adults will still be able to access unblurred material by clicking through, and there is no requirement for Australians to log in or identify themselves before searching.
These codes will operate alongside existing standards that tackle unlawful content and will complement new minimum age requirements for social media, which are set to begin in mid-December.
Authorities in Australia consider the reforms essential for reducing preventable harm and guiding vulnerable users towards appropriate support services.
Would you like to learn more aboutAI, tech and digital diplomacy? If so, ask our Diplo chatbot!
European regulators have imposed a fine of one hundred and twenty million euros on X after ruling that the platform breached transparency rules under the Digital Services Act.
The Commission concluded that the company misled users with its blue checkmark system, restricted research access and operated an inadequate advertising repository.
Officials found that paid verification on X encouraged users to believe their accounts had been authenticated when, in fact, no meaningful checks were conducted.
EU regulators argued that such practices increased exposure to scams and impersonation fraud, rather than supporting trust in online communication.
The Commission also stated that the platform’s advertising repository lacked essential information and created barriers that prevented researchers and civil society from examining potential threats.
European authorities judged that X failed to offer legitimate access to public data for eligible researchers. Terms of service blocked independent data collection, including scraping, while the company’s internal processes created further obstacles.
Regulators believe such restrictions frustrate efforts to study misinformation, influence campaigns and other systemic risks within the EU.
X must now outline the steps it will take to end the blue checkmark infringement within sixty working days and deliver a wider action plan on data access and advertising transparency within ninety days.
Failure to comply could lead to further penalties as the Commission continues its broader investigation into information manipulation and illegal content across the platform.
Would you like to learn more aboutAI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Greece is confronting a rapid rise in cybercrime as AI strengthens the tools available to criminals, according to the head of the National Cyber Security Authority.
Michael Bletsas warned that Europe is already experiencing hybrid conflict, with Northeastern states facing severe incidents that reveal a digital frontline. Greece has not endured physical sabotage or damage to its infrastructure, yet cyberattacks remain a pressing concern.
Bletsas noted that most activity involves cybercrime instead of destructive action. He pointed to the expansion of cyberactivism and vandalism through denial-of-service attacks, which usually cause no lasting harm.
The broader problem stems from a surge in AI-driven intrusions and espionage, which offer new capabilities to malicious groups and create a more volatile environment.
Moreover, Bletsas said that the physical and digital worlds should be viewed as a single, interconnected sphere, with security designed around shared principles rather than being treated as separate domains.
Authorities in Taiwan will block the Chinese social media and shopping app RedNote for a year following a surge in online scams linked to the platform. Officials report that more than 1,700 fraud cases have been linked to the app since last year, resulting in losses exceeding NT$247 million.
Regulators report that the company failed to meet required data-security standards and did not respond to requests for a plan to strengthen cybersecurity.
Internet providers have been instructed to restrict access, affecting several million users who now see a security warning message when opening the app.
Concerns over Beijing’s online influence and the spread of disinformation have added pressure on Taiwanese authorities to tighten oversight of Chinese platforms.
RedNote’s operators are also facing scrutiny in mainland China, where regulators have criticised the company over what they labelled ‘negative’ content.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Google has begun rolling out the Gemini 3 Deep Think mode to AI Ultra subscribers, offering enhanced reasoning for complex maths, science and logic tasks. The rollout follows last month’s preview during the Gemini 3 family release, allowing users to activate the mode directly within the Gemini app.
Deep Think builds on earlier Gemini 2.5 variants by utilising what Google refers to as parallel reasoning to test multiple hypotheses simultaneously. Early benchmark results show gains on structured problem-solving tasks, with improvements recorded on assessments such as Humanity’s Last Exam and ARC-AGI-2.
Subscribers can try the mode by selecting Deep Think in the prompt bar and choosing Gemini 3 Pro. Google states that the broader Gemini 3 upgrade enhances reliability when following lengthy instructions and reduces the need for repeated prompts during multi-step tasks.
Gemini 3 features stronger multimodal handling, enabling analysis of text, images, screenshots, PDFs and video. Capabilities include summarising lengthy material, interpreting detailed visuals and explaining graphs or charts with greater accuracy.
Larger context windows and improved planning support extended workflows such as research assistance and structured information management. Google describes Gemini 3 as its most secure model to date, with reinforced protections around sensitive or misleading queries.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
SoftBank chief Masayoshi Son told South Korean President Lee Jae Myung that advanced AI could surpass humans by an extreme margin. He suggested future systems may be 10,000 times more capable than people. The remarks came during a meeting in Seoul focused on national AI ambitions.
Son compared the potential intelligence gap to the difference between humans and goldfish. He said AI might relate to humans as humans relate to pets. Lee acknowledged the vision but admitted feeling uneasy about the scale of the described change.
Son argued that superintelligent systems would not threaten humans physically, noting they lack biological needs. He framed coexistence as the likely outcome. His comments followed renewed political interest in positioning South Korea as an AI leader.
The debate turned to cultural capability when Lee asked whether AI might win the Nobel Prize in Literature. Son said such an achievement was plausible. He pointed to fast-moving advances that continue to challenge expectations about machine creativity.
Researchers say artificial superintelligence remains theoretical, but early steps toward AGI may emerge within a decade. Many expect systems to outperform humans across a wide set of tasks. Policy discussions in South Korea reflect growing urgency around AI governance.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!