AI shows promise in scientific research tasks

FrontierScience, a new benchmark from OpenAI, evaluates AI capabilities for expert-level scientific reasoning across physics, chemistry, and biology.

The benchmark measures Olympiad-style reasoning and real-world research tasks, showing how AI can aid complex scientific workflows. Generative AI models like GPT‑5 are now used for literature searches, complex proofs, and tasks that once took days or weeks.

The benchmark consists of two tracks: FrontierScience-Olympiad, with 100 questions created by international Olympiad medalists to assess constrained scientific reasoning, and FrontierScience-Research, with 60 multi-step research tasks developed by PhD scientists.

Initial evaluations show GPT‑5.2 scoring 77% on the Olympiad set and 25% on the Research set, outperforming other frontier models. The results show AI can support structured scientific reasoning but still struggles with open-ended problem solving and hypothesis generation.

FrontierScience also introduces a grading system tailored to each track. The Olympiad set uses short-answer verification, while the Research set employs a 10-point rubric assessing both final answers and intermediate reasoning steps.

Model-based grading allows for scalable evaluation of complex tasks, although oversight by human experts remains preferable. Analyses reveal that AI models still make logic, calculation, and factual errors, particularly with niche scientific concepts.

While FrontierScience does not capture every aspect of scientific work, it provides a high-resolution snapshot of AI performance on difficult, expert-level problems. OpenAI plans to refine the benchmark, extend it to new domains, and combine it with real-world tests to track AI’s impact on scientific discovery.

The ultimate measure of success remains the novel insights and discoveries AI can help generate for the scientific community.

Streaming platforms face pressure over AI-generated music

Musicians are raising the alarm over AI-generated tracks appearing on their profiles without consent, with fraudulent work being passed off as their own. British folk artist Emily Portman discovered Orca, an AI-generated album that copied her folk style and lyrics, on Spotify and Apple Music.

Fans initially congratulated her on what appeared to be her first release since 2022.

Australian musician Paul Bender reported a similar experience, with four ‘bizarrely bad’ AI tracks appearing under his band, The Sweet Enoughs. Both artists said that weak distributor security allows scammers to easily upload content, calling it ‘the easiest scam in the world.’

A petition launched by Bender garnered tens of thousands of signatures, urging platforms to strengthen their protections.

AI-generated music has become increasingly sophisticated, making it nearly impossible for listeners to distinguish it from genuine tracks. While the revenue from any individual fraudulent stream is small, bots and repeated listening can significantly increase payouts.

Industry representatives note that the primary motive is to collect royalties from streams by unsuspecting listeners.

Despite the threat of impersonation, Portman is continuing her creative work, emphasising human collaboration and authentic artistry. Spotify and Apple Music have pledged to collaborate with distributors to enhance the detection and prevention of AI-generated fraud.

New law requires AI disclosure in advertising in the US

A new law in New York, US, will require advertisers to disclose when AI-generated people appear in commercial content. Governor Kathy Hochul said the measure brings transparency and protects consumers as synthetic avatars become more widespread.

A second law now requires consent from heirs or executors when using a deceased person’s likeness for commercial purposes. The rule updates the state’s publicity rights, which previously lacked clarity in the generative AI era.

Industry groups welcomed the move, saying it addresses the risks posed by unregulated AI usage, particularly for actors in the film and television industries. The disclosure must be conspicuous when an avatar does not correspond to a real human.

Specific expressive works such as films, games and shows are exempt when the avatar matches its use in the work. The laws arrive as national debate intensifies and President Donald Trump signals potential attempts to limit state-level AI regulation.

India moves toward mandatory AI royalty regime

India is weighing a sweeping copyright framework that would require AI companies to pay royalties for training on copyrighted works under a mandatory blanket licence branded as the hybrid ‘One Nation, One Licence, One Payment’ model.

A new Copyright Royalties Collective for AI Training, or CRCAT, would collect payments from developers and distribute money to creators. AI firms would have to rely only on lawfully accessed material and file detailed summaries of training datasets, including data types and sources.

The government panel drafting the framework is expected to favour flat, revenue-linked percentages on global earnings from commercial AI systems, reviewed roughly every three years and open to legal challenge in court.

Obligations would apply retroactively to AI developers that have already trained profitable models on copyright-protected material, framed by Indian policymakers as a corrective measure for the creative ecosystem.

Vietnam passes first AI law with strict safeguards

Vietnam’s National Assembly has passed its first AI Law, advancing the regulation and development of AI nationwide. The legislation was approved with overwhelming support, alongside amendments to the Intellectual Property Law and a revised High Technology Law.

The AI Law will take effect on March 1, 2026.

The law establishes core principles, prohibits certain acts, and outlines a risk management framework for AI systems. It combines safeguards for high-risk AI with incentives for innovation, including sandbox testing, a National AI Development Fund, and startup vouchers.

AI oversight will be centralised under the Government, led by the Ministry of Science and Technology, with assessments required only for systems on a high-risk list approved by the Prime Minister. The law allows real-time updates to that list to keep pace with technological advances.

Flexible provisions prevent obsolescence by avoiding fixed technology lists or rigid risk classifications. Lawmakers emphasised the balance between regulation and innovation, aiming to create a safe yet supportive environment for AI growth in Vietnam.

Google faces scrutiny over AI use of online content

The European Commission has opened an antitrust probe into Google over concerns it used publisher and YouTube content to develop its AI services on unfair terms.

Regulators are assessing whether Google used its dominant position to gain unfair access to content powering features like AI Overviews and AI Mode. They are examining whether publishers were disadvantaged by being unable to refuse use of their content without losing visibility on Google Search.

The probe also covers concerns that YouTube creators may have been required to allow the use of their videos for AI training without compensation, while rival AI developers remain barred from using YouTube content.

The investigation will determine whether these practices breached EU rules on abuse of dominance under Article 102 TFEU. Authorities intend to prioritise the case, though no deadline applies.

Google and national competition authorities have been formally notified as the inquiry proceeds.

Intellectual property law in Azerbaijan adapts to AI challenges

Azerbaijan is preparing to update its intellectual property legislation to address the growing impact of artificial intelligence. Kamran Imanov, Chairman of the Intellectual Property Agency, highlighted that AI raises complex questions about authorship, invention, and human–AI collaboration that current laws cannot fully resolve.

The absence of legal personality for AI creates challenges in defining rights and responsibilities, prompting a reassessment of both national and international legal norms. Imanov underlined that reforming intellectual property rules is essential for fostering innovation while protecting creators’ rights.

Recent initiatives, including the adoption of a national AI strategy and the establishment of the Artificial Intelligence Academy, demonstrate Azerbaijan’s commitment to building a robust governance framework for emerging technologies. The country is actively prioritising AI regulation to guide ethical development and usage.

The Intellectual Property Agency, in collaboration with the World Intellectual Property Organization, recently hosted an international conference in Baku on intellectual property and AI. Experts from around the globe convened to discuss the challenges and opportunities posed by AI in the legal protection of inventions and creative works.

Survey reveals split views on AI in academic peer review

Growing use of generative AI within peer review is creating a sharp divide among physicists, according to a new survey by the Institute of Physics Publishing.

Compared with earlier surveys, researchers appear better informed and more willing to express firm views, with a notable rise in those who see a positive effect alongside a large group voicing strong reservations. Many believe AI tools accelerate early reading and help reviewers concentrate on novelty instead of routine work.

Others fear that reviewers might replace careful evaluation with automated text generation, undermining the value of expert judgement.

A sizeable proportion of researchers would be unhappy if AI shaped assessments of their own papers, even though many quietly rely on such tools when reviewing for journals. Publishers are now revisiting their policies, yet they aim to respect authors who expect human-led scrutiny.

Editors also find that AI-generated referee reports often lack depth and fail to reflect domain expertise. Concerns extend to confidentiality, with organisations such as the American Physical Society warning that uploading manuscripts to chatbots can breach author trust.

Legal disputes about training data add further uncertainty, pushing publishers to approach policy changes with caution.

Despite disagreements, many researchers accept that AI will remain part of peer review as workloads increase and scientific output grows. The debate now centres on how to integrate new tools in a way that supports researchers instead of weakening the foundations of scholarly communication.

Tilly Norwood creator accelerates AI-first entertainment push

The AI talent studio behind synthetic actress Tilly Norwood is preparing to expand what it calls the ‘Tilly-verse’, moving into a new phase of AI-first entertainment built around multiple digital characters.

Xicoia, founded by Particle6 and Tilly creator Eline van der Velden, is recruiting for nine roles spanning writing, production, growth, and AI development, including a junior comedy writer, a social media manager, and a senior ‘AI wizard-in-chief’.

The UK-based studio says the hires will support Tilly’s planned 2026 expansion into on-screen appearances and direct fan interaction, alongside the introduction of new AI characters designed to coexist within the same fictional universe.

Van der Velden argues the project creates jobs rather than eliminating them, positioning the studio as a response to anxieties around AI in entertainment and rejecting claims that Tilly is meant to displace human performers.

Industry concerns persist, however, with actors’ representatives disputing whether synthetic creations can be considered performers at all and warning that protecting human artists’ names, images, and likenesses remains critical as AI adoption accelerates.

Japan aims to boost public AI use

Japan has drafted a new basic programme aimed at dramatically increasing public use of AI, with a target of raising utilisation from 50% to 80%. The government hopes the policy will strengthen domestic AI capabilities and reduce reliance on foreign technologies.

To support innovation, authorities plan to attract roughly ¥1 trillion in private investment, funding research, talent development and the expansion of AI businesses into emerging markets. Officials see AI as a core social infrastructure that supports both intellectual and practical functions.

The draft proposes a unified AI ecosystem where developers, chip makers and cloud providers collaborate to strengthen competitiveness and reduce Japan’s digital trade deficit. AI adoption is also expected to extend across all ministries and government agencies.

Prime Minister Sanae Takaichi has pledged to make Japan the easiest country in the world for AI development and use. The Cabinet is expected to approve the programme before the end of the year, paving the way for accelerated research and public-private investment.
