Survey reveals split views on AI in academic peer review

Growing use of generative AI within peer review is creating a sharp divide among physicists, according to a new survey by the Institute of Physics Publishing.

Researchers appear better informed and more willing to express firm views than in earlier surveys, with a notable rise in those who see a positive effect and a large group voicing strong reservations. Many believe AI tools accelerate early reading and help reviewers concentrate on novelty instead of routine work.

Others fear that reviewers might replace careful evaluation with automated text generation, undermining the value of expert judgement.

A sizeable proportion of researchers would be unhappy if AI shaped the assessment of their own papers, even though many quietly rely on such tools when reviewing for journals. Publishers are now revisiting their policies, yet they aim to respect authors who expect human-led scrutiny.

Editors also report that AI-generated reports often lack depth and fail to reflect domain expertise. Concerns extend to confidentiality, with organisations such as the American Physical Society warning that uploading manuscripts to chatbots can breach author trust.

Legal disputes about training data add further uncertainty, pushing publishers to approach policy changes with caution.

Despite disagreements, many researchers accept that AI will remain part of peer review as workloads increase and scientific output grows. The debate now centres on how to integrate new tools in a way that supports researchers instead of weakening the foundations of scholarly communication.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Tilly Norwood creator accelerates AI-first entertainment push

The AI talent studio behind synthetic actress Tilly Norwood is preparing to expand what it calls the ‘Tilly-verse’, moving into a new phase of AI-first entertainment built around multiple digital characters.

Xicoia, founded by Particle6 and Tilly creator Eline van der Velden, is recruiting for nine roles spanning writing, production, growth, and AI development, including a junior comedy writer, a social media manager, and a senior ‘AI wizard-in-chief’.

The UK-based studio says the hires will support Tilly’s planned 2026 expansion into on-screen appearances and direct fan interaction, alongside the introduction of new AI characters designed to coexist within the same fictional universe.

Van der Velden argues the project creates jobs rather than replacing them, positioning the studio as a response to anxieties around AI in entertainment and rejecting claims that Tilly is meant to displace human performers.

Industry concerns persist, however, with actors’ representatives disputing whether synthetic creations can be considered performers at all and warning that protecting human artists’ names, images, and likenesses remains critical as AI adoption accelerates.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Japan aims to boost public AI use

Japan has drafted a new basic programme aimed at dramatically increasing public use of AI, with a target of raising utilisation from 50% to 80%. The government hopes the policy will strengthen domestic AI capabilities and reduce reliance on foreign technologies.

To support innovation, authorities plan to attract roughly ¥1 trillion in private investment, funding research, talent development and the expansion of AI businesses into emerging markets. Officials see AI as a core social infrastructure that supports both intellectual and practical functions.

The draft proposes a unified AI ecosystem where developers, chip makers and cloud providers collaborate to strengthen competitiveness and reduce Japan’s digital trade deficit. AI adoption is also expected to extend across all ministries and government agencies.

Prime Minister Sanae Takaichi has pledged to make Japan the easiest country in the world for AI development and use. The Cabinet is expected to approve the programme before the end of the year, paving the way for accelerated research and public-private investment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Waterstones open to selling AI-generated books, but only with clear labelling

Waterstones CEO James Daunt has stated that the company is willing to stock books created using AI, provided the works are transparently labelled and there is genuine customer demand.

In an interview on the BBC’s Big Boss podcast, Daunt stressed that Waterstones currently avoids placing AI-generated books on shelves and that his instinct as a bookseller is to ‘recoil’ from such titles. However, he emphasised that the decision ultimately rests with readers.

Daunt described the wider surge in AI-generated content as largely unsuitable for bookshops, saying most such works are not of a type Waterstones would typically sell. The publishing industry continues to debate the implications of generative AI, particularly around threats to authors’ livelihoods and the use of copyrighted works to train large language models.

A recent University of Cambridge survey found that more than half of published authors fear being replaced by AI, and two-thirds believe their writing has been used without permission to train models.

Despite these concerns, some writers are adopting AI tools for research or editing, while AI-generated novels and full-length works are beginning to emerge.

Daunt noted that Waterstones would consider carrying such titles if readers show interest, while making clear that the chain would always label AI-authored works to avoid misleading consumers. He added that readers tend to value the human connection with authors, suggesting that AI books are unlikely to be prominently featured in stores.

Daunt has led Waterstones since 2011, reshaping the chain by decentralising decision-making and removing the longstanding practice of publishers paying for prominent in-store placement. He also currently heads Barnes & Noble in the United States.

With both chains now profitable, Daunt acknowledged that a future share flotation is increasingly likely. However, no decision has been taken on whether London or New York would host any potential IPO.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Will the AI boom hold or collapse?

Global investment in AI has soared to unprecedented heights, yet the technology’s real-world adoption lags far behind the market’s feverish expectations. Despite trillions of dollars in valuations and a global AI market projected to reach nearly $5 trillion by 2033, mounting evidence suggests that companies struggle to translate AI pilots into meaningful results.

As Jovan Kurbalija argues in his recent analysis, hype has outpaced both technological limits and society’s ability to absorb rapid change, raising the question of whether the AI bubble is nearing a breaking point.

Kurbalija identifies several forces inflating the bubble, such as relentless media enthusiasm that fuels fear of missing out, diminishing returns on ever-larger computing power, and the inherent logical constraints of today’s large language models, which cannot simply be ‘scaled’ into human-level intelligence.

At the same time, organisations are slow to reorganise workflows, regulations, and skills around AI, resulting in high failure rates for corporate initiatives. A new competitive landscape, driven by ultra-low-cost open-source models such as China’s DeepSeek, further exposes the fragility of current proprietary spending and the vast discrepancies in development costs.

Looking forward, Kurbalija outlines possible futures ranging from a rational shift toward smaller, knowledge-centric AI systems to a world in which major AI firms become ‘too big to fail’, protected by government backstops similar to those deployed during the 2008 financial crisis. Geopolitics may also justify massive public spending as the US and China frame AI leadership as a national security imperative.

Other scenarios include a consolidation of power among a handful of tech giants or a mild ‘AI winter’ in which investment cools and attention pivots to the next frontier technologies, such as quantum computing or immersive digital environments.

Regardless of which path emerges, the defining battle ahead will centre on the open-source versus proprietary AI debate. Both Washington and Beijing are increasingly embracing open models as strategic assets, potentially reshaping global standards and forcing big tech firms to rethink their closed ecosystems.

As Kurbalija concludes, the outcome will depend less on technical breakthroughs and more on societal choices, balancing openness, competition, and security in shaping whether AI becomes a sustainable foundation of economic life or the latest digital bubble to deflate under its own weight.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Jorja Smith’s label challenges ‘AI clone’ vocals on viral track

A dispute has emerged after FAMM, the record label representing Jorja Smith, alleged that the viral dance track I Run by Haven used an unauthorised AI clone of the singer’s voice.

The BBC’s report describes how the song gained traction on TikTok before being removed from streaming platforms following copyright complaints.

The label said it wanted a share of royalties, arguing that both versions of the track, the original release and a re-recording with new vocals, infringed Smith’s rights and exploited the creative labour behind her catalogue.

FAMM said the issue was bigger than one artist, warning that fans had been misled and that unlabelled AI music risked becoming ‘the new normal’. Smith later shared the label’s statement, which characterised artists as ‘collateral damage’ in the race towards AI-driven production.

Producers behind I Run confirmed that AI was used to transform their own voices into a more soulful, feminine tone. Harrison Walker said he used Suno, generative software sometimes called the ‘ChatGPT for music’, to reshape his vocals, while fellow producer Waypoint admitted employing AI to achieve the final sound.

They maintain that the songwriting and production were fully human and shared project files to support their claim.

The controversy highlights broader tensions surrounding AI in music. Suno has acknowledged training its system on copyrighted material under the US ‘fair use’ doctrine, while record labels continue to challenge such practices.

Even as the AI version of I Run was barred from chart eligibility, its revised version reached the UK Top 40. At the same time, AI-generated acts such as Breaking Rust and hybrid AI-human projects like Velvet Sundown have demonstrated the growing commercial appeal of synthetic vocals.

Musicians and industry figures are increasingly urging stronger safeguards. FAMM said AI-assisted tracks should be clearly labelled, and added it would distribute any royalties to Smith’s co-writers in proportion to how much of her catalogue they contributed to, arguing that if AI relied on her work, so should any compensation.

The debate continues as artists push back more publicly, including through symbolic protests such as last week’s vinyl release of silent tracks, which highlighted fears over weakened copyright protections.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

DeepSeek launches AI model achieving gold-level maths scores

Chinese AI company DeepSeek has unveiled Math-V2, the first open-source AI model to achieve gold-level performance at the International Mathematical Olympiad.

The system, now available on GitHub and Hugging Face, allows developers to freely modify and deploy the model under a permissive licence.

Math-V2 also excelled in the 2024 Chinese Mathematical Olympiad, demonstrating advanced reasoning and problem-solving capabilities. Unlike many AI systems, it features a self-verification process that enables it to check solutions even for problems without known answers.

The launch comes as US AI leaders, such as Google DeepMind and OpenAI, have achieved similar milestones with their proprietary models.

Open access to Math-V2 could democratise advanced mathematical tools, potentially accelerating scientific research and development globally.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

DeepSeek opens access to gold-level maths AI

Chinese AI firm DeepSeek has released the first open AI model capable of achieving gold-medal results at the International Mathematical Olympiad. Math-V2 is now freely available on Hugging Face and GitHub, allowing developers to repurpose it and run it locally.

Gold-level performance at the IMO is remarkably rare, with only a small share of human participants reaching the top tier. DeepSeek aims to make such advanced mathematical capabilities accessible to researchers and developers who previously lacked access to comparable systems.

The company said its model achieved gold-level scores in both this year’s Olympiad and the Chinese Mathematical Olympiad. The results relied on strong theorem-proving skills and a new ‘self-verification’ method for reasoning without known solutions.

Observers said the open release could lower barriers to advanced maths AI, while US firms keep their Olympiad-level systems restricted. Supporters of open-source development welcomed the move as a significant step toward democratising advanced scientific tools.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Warner Music partners with AI song generator Suno

A landmark agreement has been reached between Warner Music and AI music platform Suno, ending last year’s copyright lawsuit that accused the service of using artists’ work without permission.

Fans can now generate AI-created songs using the voices, names, and likenesses of Warner artists who opt in, offering a new way to engage with music.

The partnership will introduce new licensed AI models, with download limits and paid tiers intended to prevent a flood of AI tracks on streaming platforms.

Suno has also acquired the live-music discovery platform Songkick, expanding its digital footprint and strengthening connections between AI music and live events.

Music industry experts say the deal demonstrates how AI innovation can coexist with artists’ rights, as the UK government continues consultations on intellectual property for AI.

Creators and policymakers are advocating opt-in frameworks to ensure artists are fairly compensated when their works are used to train AI models.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI chatbots misidentify images they created

Growing numbers of online users are turning to AI chatbots to verify suspicious images, yet many tools are failing to detect fakes they created themselves. AFP found several cases in Asia where AI systems labelled fabricated photos as authentic, including a viral image of former Philippine lawmaker Elizaldy Co.

The failures highlight a lack of genuine visual analysis in current models. Many are trained primarily on language patterns, resulting in inconsistent judgements even on images produced by the same generative systems.

Investigations also uncovered similar misidentifications during unrest in Pakistan-administered Kashmir, where AI models wrongly validated synthetic protest images. A Columbia University review reinforced the trend, with seven leading systems unable to verify any of the ten authentic news photos.

Specialists argue that AI may assist professional fact-checkers but cannot replace them. They emphasise that human verification remains essential as AI-generated content becomes increasingly lifelike and continues to circulate widely across social media platforms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!