Singapore cooperation with Japan targets AI in patent examination

The Intellectual Property Office of Singapore and the Japan Patent Office have announced a new cooperation initiative on the use of AI in patent substantive examination, as patent offices adapt to rapid technological change.

The initiative was announced after a bilateral meeting in Singapore between IPOS Chief Executive Tan Kong Hwee and JPO Commissioner Yasuyuki Kasai. It builds on a Memorandum of Cooperation signed in Tokyo last November.

Under the initiative, IPOS and JPO will launch a bilateral patent examiner exchange programme and hold regular technical exchanges on the use of AI in patent examination. The two offices said the cooperation is intended to strengthen capabilities, share best practices and develop robust processes for high-quality and trusted patent examination.

Tan said AI is reshaping innovation and work processes, making it necessary for IP offices to evolve while maintaining examination quality and trust. Kasai said the cooperation would bring together the experience and expertise of both offices and support innovation in both countries.

The cooperation will also cover patent search and examination quality management, benchmarking of examination practices, IT infrastructure development, operational management and IP policy exchanges. Both offices will also coordinate initiatives to support enterprises, including SMEs, and strengthen trade and IP flows between Singapore and Japan.

IPOS and JPO said the partnership reflects their shared commitment to addressing emerging challenges in the intellectual property landscape and keeping innovation ecosystems trusted, efficient and future-ready.

Why does it matter?

Patent offices are increasingly facing pressure to handle more complex applications while maintaining examination quality, consistency and trust. Cooperation between Singapore and Japan on AI-assisted examination shows how intellectual property authorities are beginning to adapt their own administrative systems to AI, not only to regulate AI-related inventions.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Swiss media groups launch responsible AI journalism framework

Swiss media organisations have adopted a national code of conduct for the responsible use of AI, aiming to strengthen transparency, copyright protection and public trust in journalism.

The initiative is backed by major Swiss publishing groups, private radio and television organisations, the Swiss Broadcasting Corporation and the national news agency Keystone-ATS. It is based on the Council of Europe’s Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law.

The code states that media companies and their employees remain responsible for all published editorial content, whether produced by journalists or with the support of AI systems. It also commits media organisations to train staff in AI use, respect copyright, follow data protection rules and take steps to prevent the spread of false information.

Swiss media groups also agreed to inform the public transparently about their use of AI, including through dedicated information pages, and to introduce binding marking obligations for AI-supported content. The framework is designed as a self-regulatory tool at a time when public concern over AI-generated content remains high.

To support implementation, the code provides for a two-tier reporting and control mechanism. The relevant departments within media companies will handle questions and complaints in the first instance, while an independent AI ombudsperson will act as a second instance for serious or unresolved cases and publish an annual report.

Swiss President Guy Parmelin said AI could strengthen journalism if used responsibly and transparently, while warning that fake news threatens journalistic credibility and social cohesion. Legislative changes needed to implement the Council of Europe convention in Switzerland are expected by the end of 2026.

Why does it matter?

The Swiss code shows how media organisations are moving to set AI governance standards before legal obligations fully take shape. Its significance lies in linking AI-assisted journalism with editorial responsibility, transparency, copyright, data protection and complaint mechanisms, rather than treating AI labelling as the only issue. The model could influence how other media sectors balance innovation with public trust and accountability.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

The Academy introduces rules excluding AI-generated work from Oscar eligibility

The Academy’s Board of Governors has introduced new rules excluding AI-generated performances and screenplays from eligibility for the Oscars. The updated rules require that recognised work be created and performed by humans.

Under the updated framework, only performances credited in a film’s legal billing and demonstrably carried out by individuals with their consent will qualify for an Oscar. Screenplays must also be authored by humans, with the Academy reserving the right to request further disclosure on the use of AI in production.

The update comes as AI technologies are increasingly used in filmmaking, including digital recreations of actors and synthetic performers. Industry tensions around AI have grown in recent years, including during the 2023 writers’ and actors’ strikes.

The move is described as part of efforts within the creative sector to preserve human authorship and artistic control as generative AI tools expand across media production.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Intellectual property cooperation launched under EU-Japan IP Action

The European Union Intellectual Property Office has launched the EU-Japan IP Action in Tokyo, marking the first dedicated intellectual property cooperation project between the European Union and Japan.

The initiative is intended to strengthen the protection and promotion of intellectual property rights through technical cooperation, policy dialogue, and industry engagement. The launch also highlighted how AI is reshaping innovation, competition, and IP enforcement in the digital environment.

EUIPO Executive Director João Negrão said: ‘Today’s event marks a milestone: the official launch of the EU-Japan IP Action. As the first dedicated cooperation project on intellectual property between our two regions, organised by the EUIPO and co-funded by the European Union, it carries real promise – for trade, for innovation, and for growth on both sides.’

The launch brought together officials from the EU and Japan, including representatives of the Japan Patent Office and Japan’s Intellectual Property Strategy Headquarters. Speakers described the initiative as a new phase of cooperation focused on streamlining IP processes and ensuring that legal frameworks keep pace with industrial and technological change.

A panel discussion examined the impact of AI and large language models on intellectual property, including questions of authorship, ownership of AI-generated inventions, and copyright enforcement. Industry representatives also discussed practical challenges related to AI governance and anti-piracy.

The event continued with a conference on generative AI, where participants from business, government, and academia examined how IP frameworks should respond to AI-driven change. Discussions included compensation for creators whose works are used in AI training, alongside legal, contractual, and technical mechanisms that could support that goal. Creative sectors, including manga, animation, music, and video games, were also part of the discussion.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Generative AI policy updated by Australian Research Council

The Australian Research Council has updated its policy on the use of generative AI in its grants programmes, setting out how the rules apply to applicants, administering organisations, and assessors in the National Competitive Grants Program.

The revised policy has officially taken effect and applies to applications and assessments for Discovery Indigenous 2027 and all scheme rounds that open subsequently.

The policy says applicants may use generative AI tools to support tasks such as testing ideas, improving language, and summarising text, but remain responsible for the content they submit and are considered the authors of that content.

Administering organisations are also responsible for ensuring that applications are complete, accurate, and free from false or misleading information, while delegated research leaders must certify that participants are responsible for the authorship and intellectual content of applications and that they have not infringed the intellectual property rights of others.

A notable change in the revised policy is that assessors are now permitted to use generative AI tools in limited ways. The ARC says assessors may use AI only to correct or improve grammar, spelling, formatting, and the readability of drafted assessments.

At the same time, the policy states that assessors must not use AI to help form an opinion on the quality of an application and must preserve the confidentiality of all application materials. Inputting any application material into public generative AI tools such as ChatGPT, Gemini, Claude, or Perplexity is described by the ARC as a serious breach of confidentiality and is not permitted.

The ARC also says assessors will be asked about their use of AI and must be transparent when requested. Where assessors’ inappropriate use of generative AI is suspected, the ARC may remove that assessment from the process. If a breach is established following investigation, the ARC may impose consequential actions in addition to any imposed by the assessor’s employing institution.

The revised policy explains that its approach is shaped by concerns including intellectual integrity and authorship, safeguarding intellectual property, culturally appropriate use of data, content reliability and bias, human oversight and expert judgement, and energy and environmental impacts. It also states that the ARC will continue to monitor developments in generative AI and update the policy as required.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Azerbaijan explores regulatory framework for AI and intellectual property

Azerbaijani lawmakers and experts discussed the legal status of AI systems and their implications for intellectual property (IP) at a policy roundtable in Baku, Trend News Agency reported.

Speaking at the event marking World Intellectual Property Day, Member of the Azerbaijani Parliament Hijran Huseynova said that defining the legal nature of AI remains a key issue as the technology advances.

Participants highlighted differing views on whether AI should be treated as a legal entity or regarded solely as a tool. While some experts argued that AI lacks independent legal standing, others suggested that its ability to make autonomous decisions requires deeper legal examination.

The discussion also addressed whether outputs generated by AI systems can qualify for patent protection, an issue that remains under debate in many jurisdictions.

Huseynova noted that the growing use of AI is raising complex questions about ownership and rights, as traditional intellectual property frameworks are based on human creativity.

Why does it matter?

The debate comes as Azerbaijan advances its national AI strategy for 2025–2028, which includes efforts to establish legal and institutional frameworks for the development and regulation of AI technologies. Officials say these measures aim to address emerging legal challenges and support the responsible adoption of AI as part of the country’s broader digital transformation agenda.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Georgia hosts EPO talks on patents and technology transfer

European Patent Office President António Campinos visited Tbilisi for high-level meetings and a joint conference with Georgia’s National Intellectual Property Center, Sakpatenti, focused on the role of patents in technology transfer.

During the visit, Campinos met Georgia’s Minister of Education, Science and Youth, Givi Mikanadze. Discussions covered the contribution of patent systems to economic development, innovation policy, international technology cooperation, and Georgia’s alignment with European patent practices.

The meetings also highlighted cooperation between the European Patent Office and Sakpatenti, including Georgia’s validation agreement with the EPO, which the statement says has resulted in more than 300 validation requests in two years. Mikanadze said:

The validation agreement supports IP development in Georgia by establishing an environment where knowledge transforms into innovation.

At the conference, titled ‘From Research to Impact: The Role of Patents in Technology Transfer’, Campinos said:

Technology transfer, foreign investment, and the development of new technologies depend on strong research, skilled intellectual property professionals, and solid legal frameworks. Patents and our validation agreement, by providing legal certainty, predictability, and clear professional standards, support researchers, universities, businesses of all sizes, and individual inventors in moving ideas from the laboratory to the market.

The programme also addressed professional qualifications and patent skills, with the EPO highlighting certification frameworks such as the European Qualifying Examination and the European Patent Administration Certification.

Why does it matter?

Stronger patent cooperation can affect how easily research moves into commercial use, how attractive a market is for technology investment, and how predictable protection is for innovators operating across borders. In Georgia, the validation agreement is presented as part of a broader effort to strengthen the country’s innovation ecosystem and its links with European patent practice.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

YouTube expands AI deepfake detection tools for celebrities

YouTube has announced the expansion of its likeness detection technology to the entertainment industry, extending access beyond content creators to talent agencies, management companies and the individuals they represent.

The move is part of a broader effort by the platform to address the growing misuse of AI to generate misleading or unauthorised videos of public figures. By extending the tool to entertainment industry stakeholders, YouTube is signalling that AI-driven impersonation is no longer treated as a niche creator issue but as a broader identity and rights problem.

The system works in a way broadly comparable to Content ID, allowing eligible users to identify videos that use AI to replicate a person’s face or likeness. Once such content is detected, individuals can request its removal through YouTube’s existing privacy complaint process.

The rollout has been developed with input from major industry players, including Creative Artists Agency, United Talent Agency, William Morris Endeavor, and Untitled Management. Those partnerships are intended to help YouTube refine how the system works in practice and ensure it reflects the needs of artists and rights holders dealing with synthetic media.

Importantly, access to the tool is not limited to people who actively run YouTube channels. Celebrities and public figures can use it even without a direct creator presence on the platform, extending its reach across a much broader part of the entertainment ecosystem.

The significance of the update lies in how platforms are beginning to treat AI impersonation as a governance issue rather than merely a content-moderation problem.

As synthetic media tools become easier to use and more convincing, technology companies are under growing pressure to provide faster and more credible mechanisms for detecting misuse, protecting identity rights, and limiting deceptive content.

YouTube’s latest move shows that platform responses are becoming more structured and rights-based, especially in sectors where a person’s likeness is closely tied to reputation, image, and commercial value. The bigger question now is whether such tools will prove effective enough to keep pace with the scale and speed of AI-generated impersonation online.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU updates technology licensing competition rules to reflect data and digital markets

The European Commission has adopted revised rules governing technology transfer agreements (Technology Transfer Block Exemption Regulation and Guidelines on the application of Article 101 of the Treaty to technology transfer agreements), updating a framework originally introduced in 2014.

These changes aim to reflect developments in the digital economy, particularly the growing role of data and standardised technologies in enabling interoperability across markets.

Technology transfer agreements allow firms to license intellectual property such as patents, software and design rights, supporting the dissemination of innovation. While such agreements are often considered pro-competitive, they may also create risks if they restrict market access or distort competition.

The revised framework clarifies how these agreements are assessed under Article 101 of the Treaty on the Functioning of the European Union.

The updated rules introduce specific guidance on data licensing and licensing negotiation groups, addressing new market practices.

They also refine conditions under which agreements benefit from exemptions, including simplified criteria for early-stage technologies and clearer safeguards for technology pools linked to industry standards.

Overall, the revision by the EU seeks to improve legal certainty for businesses while ensuring that licensing practices support innovation, competition and the broader functioning of the single market. The new framework will apply from May 2026.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU advances AI copyright safeguards through GPAI taskforce discussions

The European Commission has convened the second meeting of the Signatory Taskforce under the General-Purpose AI Code of Practice (GPAI), focusing on copyright protection in AI systems.

The discussion brought together signatories to exchange early implementation practices and technical approaches.

Participants examined methods to reduce copyright risks in AI-generated outputs, highlighting measures applied across the model’s lifecycle, including data selection, training, and deployment.

Emphasis was placed on combining technical safeguards with organisational processes to improve transparency and effectiveness.

One approach presented involved training models on licensed content alongside attribution systems to identify similarities between generated outputs and source material. Such a method aims to support fair remuneration and strengthen accountability within AI development.

The meeting also addressed mechanisms for handling complaints from rights holders, with participants discussing procedures for accessible and timely responses.

The exchange forms part of ongoing EU efforts to refine governance standards for AI systems and copyright compliance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!