New interview study tracks how workers adapt to AI

Anthropic has unveiled Anthropic Interviewer, an AI-driven tool for large-scale workplace interviews. The system used Claude to conduct 1,250 structured interviews with professionals across the general workforce, creative fields and scientific research.

In surveys, 86 percent said AI saves time and 65 percent felt satisfied with its role at work. Workers often hoped to automate routine tasks while preserving responsibilities that define their professional identity.

Creative workers reported major time savings and quality gains yet faced stigma and economic anxiety around AI use. Many hid AI tools from colleagues, feared market saturation and still insisted on retaining creative control.

Across groups, professionals imagined careers where humans oversee AI systems rather than perform every task themselves. Anthropic plans to keep using Anthropic Interviewer to track attitudes and inform future model design.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Japan weighs easing rules on personal data use

Japan is preparing to relax restrictions on personal data use to support rapid AI development. Government sources say a draft bill aims to expand third-party access to sensitive information.

Plans include allowing medical histories and criminal records to be obtained without consent for statistical purposes. Japanese officials argue such access could accelerate research while strengthening domestic competitiveness.

New administrative fines would target companies that profit from unlawfully acquired data affecting large groups. Penalties would match any gains made through misconduct, reflecting growing concern over privacy abuses.

A government panel has reviewed the law since 2023 and intends to present reforms soon. Debate is expected to intensify as critics warn of increased risks to individual rights if support for AI development in this regard continues.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Australia seals $4.6 billion deal for new AI hub

OpenAI has partnered with Australian data centre operator NextDC to build a major AI campus in western Sydney. The companies signed an agreement covering development, planning and long-term operation of the vast site.

NextDC said the project will include a supercluster of graphics processors to support advanced AI workloads. Both firms intend to create infrastructure capable of meeting rapid global demand for high-performance computing.

Australia estimates the development at A$7 billion and forecasts thousands of jobs during construction and ongoing roles across engineering and operations. Officials say the initiative aligns with national efforts to strengthen technological capability.

Plans feature renewable energy procurement and cooling systems that avoid drinking water use, addressing sustainability concerns. Treasurer Jim Chalmers said the project reflects growing confidence in Australia’s talent, clean energy capacity and emerging AI economy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Tilly Norwood creator accelerates AI-first entertainment push

The AI talent studio behind synthetic actress Tilly Norwood is preparing to expand what it calls the ‘Tilly-verse’, moving into a new phase of AI-first entertainment built around multiple digital characters.

Xicoia, founded by Particle6 and Tilly creator Eline van der Velden, is recruiting for 9 roles spanning writing, production, growth, and AI development, including a junior comedy writer, a social media manager, and a senior ‘AI wizard-in-chief’.

The UK-based studio says the hires will support Tilly’s planned 2026 expansion into on-screen appearances and direct fan interaction, alongside the introduction of new AI characters designed to coexist within the same fictional universe.

Van der Velden argues the project creates jobs rather than replacing them, positioning the studio as a response to anxieties around AI in entertainment and rejecting claims that Tilly is meant to displace human performers.

Industry concerns persist, however, with actors’ representatives disputing whether synthetic creations can be considered performers at all and warning that protecting human artists’ names, images, and likenesses remains critical as AI adoption accelerates.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Japan aims to boost public AI use

Japan has drafted a new basic programme aimed at dramatically increasing public use of AI, with a target of raising utilisation from 50% to 80%. The government hopes the policy will strengthen domestic AI capabilities and reduce reliance on foreign technologies.

To support innovation, authorities plan to attract roughly ¥1 trillion in private investment, funding research, talent development and the expansion of AI businesses into emerging markets. Officials see AI as a core social infrastructure that supports both intellectual and practical functions.

The draft proposes a unified AI ecosystem where developers, chip makers and cloud providers collaborate to strengthen competitiveness and reduce Japan’s digital trade deficit. AI adoption is also expected to extend across all ministries and government agencies.

Prime Minister Sanae Takaichi has pledged to make Japan the easiest country in the world for AI development and use. The Cabinet is expected to approve the programme before the end of the year, paving the way for accelerated research and public-private investment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

NITDA warns of prompt injection risks in ChatGPT models

Nigeria’s National Information Technology Development Agency (NITDA) has issued an urgent advisory on security weaknesses in OpenAI’s ChatGPT models. The agency warned that flaws affecting GPT-4o and GPT-5 could expose users to data leakage through indirect prompt injection.

According to NITDA’s Computer Emergency Readiness and Response Team, seven critical flaws were identified that allow hidden instructions to be embedded in web content. Malicious prompts can be triggered during routine browsing, search or summarisation without user interaction.

The advisory warned that attackers can bypass safety filters, exploit rendering bugs and manipulate conversation context. Some techniques allow injected instructions to persist across future interactions by interfering with the models’ memory functions.

While OpenAI has addressed parts of the issue, NITDA said large language models still struggle to reliably distinguish malicious data from legitimate input. Risks include unintended actions, information leakage and long-term behavioural influence.

NITDA urged users and organisations in Nigeria to apply updates promptly and limit browsing or memory features when not required. The agency said that exposing AI systems to external tools increases their attack surface and demands stronger safeguards.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Toyota and NTT push for accident free mobility

NTT and Toyota have expanded their partnership with a new initiative aimed at advancing safer mobility and reducing traffic accidents. The firms announced a Mobility AI Platform that combines high-quality communications, distributed computing and AI to analyse large volumes of data.

Toyota intends to use the platform to support software-defined vehicles, enabling continuous improvements in safety through data-driven automated driving systems.

The company plans to update its software and electronics architecture so vehicles can gather essential information and receive timely upgrades, strengthening both safety and security.

The platform will use three elements: distributed data centres, intelligent networks and an AI layer that learns from people, vehicles and infrastructure. As software-defined vehicles rise, Toyota expects a sharp increase in data traffic and a greater need for processing capacity.

Development will begin in 2025 with an investment of around 500 billion yen. Public trials are scheduled for 2028, followed by wider introduction from 2030.

Both companies hope to attract additional partners as they work towards a more connected and accident-free mobility ecosystem.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

UK lawmakers push for binding rules on advanced AI

Growing political pressure is building in Westminster as more than 100 parliamentarians call for binding regulation on the most advanced AI systems, arguing that current safeguards lag far behind industry progress.

A cross-party group, supported by former defence and AI ministers, warns that unregulated superintelligent models could threaten national and global security.

The campaign, coordinated by Control AI and backed by tech figures including Skype co-founder Jaan Tallinn, urges Prime Minister Keir Starmer to distance the UK from the US stance against strict federal AI rules.

Experts such as Yoshua Bengio and senior peers argue that governments remain far behind AI developers, leaving companies to set the pace with minimal oversight.

Calls for action come after warnings from frontier AI scientists that the world must decide by 2030 whether to allow highly advanced systems to self-train.

Campaigners want the UK to champion global agreements limiting superintelligence development, establish mandatory testing standards and introduce an independent watchdog to scrutinise AI use in the public sector.

Government officials maintain that AI is already regulated through existing frameworks, though critics say the approach lacks urgency.

Pressure is growing for new, binding rules on the most powerful models, with advocates arguing that rapid advances mean strong safeguards may be needed within the next two years.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

EU ministers call for faster action on digital goals

European ministers have adopted conclusions aimed to boosting the Union’s digital competitiveness, urging quicker progress toward the 2030 digital decade goals.

Officials called for stronger digital skills, wider adoption of technology, and a framework that supports innovation while protecting fundamental rights. Digital sovereignty remains a central objective, framed as open, risk-based and aligned with European values.

Ministers supported simplifying digital rules for businesses, particularly SMEs and start-ups, which face complex administrative demands. A predictable legal environment, less reporting duplication and more explicit rules were seen as essential for competitiveness.

Governments emphasised that simplification must not weaken data protection or other core safeguards.

Concerns over online safety and illegal content were a prominent feature in discussions on enforcing the Digital Services Act. Ministers highlighted the presence of harmful content and unsafe products on major marketplaces, calling for stronger coordination and consistent enforcement across member states.

Ensuring full compliance with EU consumer protection and product safety rules was described as a priority.

Cyber-resilience was a key focus as ministers discussed the increasing impact of cyberattacks on citizens and the economy. Calls for stronger defences grew as digital transformation accelerated, with several states sharing updates on national and cross-border initiatives.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Australia introduces new codes to protect children online

Australian regulators have released new guidance ahead of the introduction of industry codes designed to protect children from exposure to harmful online material.

The Age Restricted Material Codes will apply to a wide range of online services, including app stores, social platforms, equipment providers, pornography sites and generative AI services, with the first tranche beginning on 27 December.

The rules require search engines to blur image results involving pornography or extreme violence to reduce accidental exposure among young users.

Search services must also redirect people seeking information related to suicide, self-harm or eating disorders to professional mental health support instead of allowing harmful spirals to unfold.

eSafety argues that many children unintentionally encounter disturbing material at very young ages, often through search results that act as gateways rather than deliberate choices.

The guidance emphasises that adults will still be able to access unblurred material by clicking through, and there is no requirement for Australians to log in or identify themselves before searching.

eSafety maintains that the priority lies in shielding children from images and videos they cannot cognitively process or forget once they have seen them.

These codes will operate alongside existing standards that tackle unlawful content and will complement new minimum age requirements for social media, which are set to begin in mid-December.

Authorities in Australia consider the reforms essential for reducing preventable harm and guiding vulnerable users towards appropriate support services.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!