Real-time journalism becomes central to Meta AI strategy

Meta has signed commercial agreements with news publishers to feed real-time reporting into Meta AI, enabling its chatbot to answer news-related queries with up-to-date information from multiple editorial sources.

The company said responses will include links to full articles, directing users to publishers’ websites and helping partners reach new audiences beyond traditional platform distribution.

Initial partners span US and international outlets, covering global affairs, politics, entertainment, and sports, with Meta signalling that additional publishing deals are in the works.

The shift marks a recalibration. Meta previously reduced its emphasis on news across Facebook and ended most publisher payments, but now sees licensed reporting as essential to improving AI accuracy and relevance.

Facing intensifying competition in the AI market, Meta is positioning real-time journalism as a differentiator for its chatbot, which is available across its apps and to users worldwide.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Creatives warn that AI is reshaping their jobs

AI is accelerating across creative fields, raising concerns among workers who say the technology is reshaping livelihoods faster than anyone expected.

A University of Cambridge study recently found that more than two-thirds of creative professionals fear AI has undermined their job security, and many now describe the shift as unavoidable.

One of them is Norwich-based artist Aisha Belarbi, who says the rise of image-generation tools has made commissions harder to secure as clients ‘can just generate whatever they want’. Although she works in both traditional and digital media, Belarbi says she increasingly struggles to distinguish original art from AI output. That uncertainty, she argues, threatens the value of lived experience and the labour behind creative work.

Others are embracing the change. Videographer JP Allard transformed his Milton Keynes production agency after discovering the speed and scale of AI-generated video. His company now produces multilingual ‘digital twins’ and fully AI-generated commercials, work he says is quicker and cheaper than traditional filming. Yet he acknowledges that the pace of change can leave staff behind and says retraining has not kept up with the technology.

For musician Ross Stewart, the concern centres on authenticity. After listening to what he later discovered was an AI-generated blues album, he questioned the impact of near-instant song creation on musicians’ livelihoods and exposure. He believes audiences will continue to seek human performance, but worries that the market for licensed music is already shifting towards AI alternatives.

Copywriter Niki Tibble has experienced similar pressures. Returning from maternity leave, she found that AI tools had taken over many entry-level writing tasks. While some clients still prefer human writers for strategy, nuance and brand voice, Tibble’s work has increasingly shifted toward reviewing and correcting AI-generated copy. She says the uncertainty leaves her unsure whether her role will exist in a decade.

Across these stories, creative workers describe a sector in rapid transition. While some see new opportunities, many fear the speed of adoption and a future where AI replaces the very work that has long defined their craft.

Claude Code expands automated AI fine-tuning for businesses

Anthropic’s Claude Code now supports automated fine-tuning of open-source AI models, significantly widening access to advanced customisation for small and medium-sized businesses (SMBs). The new capability allows companies to train personalised systems on their own data without needing specialised technical expertise.

Claude Code’s hf-llm-trainer skill manages everything from hardware selection to authentication and training optimisation, simplifying what was once a highly complex workflow. Early accounts suggest the process can cost only a few cents, lowering barriers for firms seeking tailored AI solutions.

Businesses can now use customer logs, product manuals or internal documents to build AI models adapted to their operations, enabling improved support tools and content workflows. Many analysts view the advance as a major step in giving SMBs affordable access to company-specific AI that previously required substantial investment.
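The hf-llm-trainer skill's actual interface is not documented here, but the kind of input such a fine-tuning workflow typically consumes can be illustrated with a short sketch: converting internal support logs into the chat-style JSONL format widely used for supervised fine-tuning. All field names, file paths and sample records below are hypothetical, not part of the Claude Code skill.

```python
import json

# Hypothetical sample of internal support logs; in practice these would
# come from a company's ticketing system or documentation.
support_logs = [
    {"question": "How do I reset my password?",
     "answer": "Use the 'Forgot password' link on the sign-in page."},
    {"question": "Where can I download my invoice?",
     "answer": "Invoices are under Account > Billing > History."},
]

def to_chat_example(record):
    """Wrap one Q&A record as a two-turn chat training example."""
    return {
        "messages": [
            {"role": "user", "content": record["question"]},
            {"role": "assistant", "content": record["answer"]},
        ]
    }

def write_jsonl(records, path):
    """Serialise examples as one JSON object per line (JSONL)."""
    with open(path, "w", encoding="utf-8") as f:
        for rec in records:
            f.write(json.dumps(to_chat_example(rec)) + "\n")

write_jsonl(support_logs, "train.jsonl")
```

A file in this shape is what most open-source fine-tuning tooling expects; the automated workflow described above would then handle hardware selection and training on top of it.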

Pudu showcases next-generation D5 robot dog at Tokyo exhibition

Pudu Robotics’ latest showcase in Tokyo reflects its ambition to strengthen its global footprint with the debut of the D5 robot dog. The four-legged machine demonstrated stable stair-descent, smooth mobility and autonomous obstacle avoidance during the IREX exhibition.

Equipped with Nvidia’s Orin chip, fisheye cameras and dual lidar units, the D5 is engineered for inspection, monitoring and delivery tasks across demanding environments. Pudu highlights the robot’s resilience, crediting its in-house joint modules and motors for improved precision and durability.

Growth across the service-robot sector continues to accelerate, supported by falling manufacturing costs in China and wider industry adoption. Pudu, which has surpassed 100,000 global sales, is now steering development towards specialised and humanoid forms as it prepares for an IPO.

New interview study tracks how workers adapt to AI

Anthropic has unveiled Anthropic Interviewer, an AI-driven tool for large-scale workplace interviews. The system used Claude to conduct 1,250 structured interviews with professionals across the general workforce, creative fields and scientific research.

In surveys, 86 percent said AI saves time and 65 percent felt satisfied with its role at work. Workers often hoped to automate routine tasks while preserving responsibilities that define their professional identity.

Creative workers reported major time savings and quality gains yet faced stigma and economic anxiety around AI use. Many hid AI tools from colleagues, feared market saturation and still insisted on retaining creative control.

Across groups, professionals imagined careers where humans oversee AI systems rather than perform every task themselves. Anthropic plans to keep using Anthropic Interviewer to track attitudes and inform future model design.

Japan weighs easing rules on personal data use

Japan is preparing to relax restrictions on personal data use to support rapid AI development. Government sources say a draft bill aims to expand third-party access to sensitive information.

Plans include allowing medical histories and criminal records to be obtained without consent for statistical purposes. Japanese officials argue such access could accelerate research while strengthening domestic competitiveness.

New administrative fines would target companies that profit from unlawfully acquired data affecting large groups. Penalties would match any gains made through misconduct, reflecting growing concern over privacy abuses.

A government panel has been reviewing the law since 2023 and intends to present reforms soon. Debate is expected to intensify, with critics warning that loosening data protections in the name of AI development could heighten risks to individual rights.

Australia seals $4.6 billion deal for new AI hub

OpenAI has partnered with Australian data centre operator NextDC to build a major AI campus in western Sydney. The companies signed an agreement covering development, planning and long-term operation of the vast site.

NextDC said the project will include a supercluster of graphics processors to support advanced AI workloads. Both firms intend to create infrastructure capable of meeting rapid global demand for high-performance computing.

Australia estimates the development at A$7 billion (about US$4.6 billion) and forecasts thousands of jobs during construction, plus ongoing roles across engineering and operations. Officials say the initiative aligns with national efforts to strengthen technological capability.

Plans feature renewable energy procurement and cooling systems that avoid drinking water use, addressing sustainability concerns. Treasurer Jim Chalmers said the project reflects growing confidence in Australia’s talent, clean energy capacity and emerging AI economy.

Tilly Norwood creator accelerates AI-first entertainment push

The AI talent studio behind synthetic actress Tilly Norwood is preparing to expand what it calls the ‘Tilly-verse’, moving into a new phase of AI-first entertainment built around multiple digital characters.

Xicoia, founded by Particle6 and Tilly creator Eline van der Velden, is recruiting for nine roles spanning writing, production, growth, and AI development, including a junior comedy writer, a social media manager, and a senior ‘AI wizard-in-chief’.

The UK-based studio says the hires will support Tilly’s planned 2026 expansion into on-screen appearances and direct fan interaction, alongside the introduction of new AI characters designed to coexist within the same fictional universe.

Van der Velden argues the project creates jobs rather than replacing them, positioning the studio as a response to anxieties around AI in entertainment and rejecting claims that Tilly is meant to displace human performers.

Industry concerns persist, however, with actors’ representatives disputing whether synthetic creations can be considered performers at all and warning that protecting human artists’ names, images, and likenesses remains critical as AI adoption accelerates.

Japan aims to boost public AI use

Japan has drafted a new basic programme aimed at dramatically increasing public use of AI, with a target of raising utilisation from 50% to 80%. The government hopes the policy will strengthen domestic AI capabilities and reduce reliance on foreign technologies.

To support innovation, authorities plan to attract roughly ¥1 trillion in private investment, funding research, talent development and the expansion of AI businesses into emerging markets. Officials see AI as a core social infrastructure that supports both intellectual and practical functions.

The draft proposes a unified AI ecosystem where developers, chip makers and cloud providers collaborate to strengthen competitiveness and reduce Japan’s digital trade deficit. AI adoption is also expected to extend across all ministries and government agencies.

Prime Minister Sanae Takaichi has pledged to make Japan the easiest country in the world for AI development and use. The Cabinet is expected to approve the programme before the end of the year, paving the way for accelerated research and public-private investment.

NITDA warns of prompt injection risks in ChatGPT models

Nigeria’s National Information Technology Development Agency (NITDA) has issued an urgent advisory on security weaknesses in OpenAI’s ChatGPT models. The agency warned that flaws affecting GPT-4o and GPT-5 could expose users to data leakage through indirect prompt injection.

According to NITDA’s Computer Emergency Readiness and Response Team, researchers identified seven critical flaws that allow hidden instructions to be embedded in web content. Malicious prompts can then be triggered during routine browsing, search or summarisation, without any user interaction.

The advisory warned that attackers can bypass safety filters, exploit rendering bugs and manipulate conversation context. Some techniques allow injected instructions to persist across future interactions by interfering with the models’ memory functions.

While OpenAI has addressed parts of the issue, NITDA said large language models still struggle to reliably distinguish malicious data from legitimate input. Risks include unintended actions, information leakage and long-term behavioural influence.

NITDA urged users and organisations in Nigeria to apply updates promptly and limit browsing or memory features when not required. The agency said that exposing AI systems to external tools increases their attack surface and demands stronger safeguards.
