Singapore and the EU advance their digital partnership

The European Union and Singapore met in Brussels for the second Digital Partnership Council, reinforcing a joint ambition to strengthen cooperation across a broad set of digital priorities.

Both sides expressed a shared interest in improving competitiveness, expanding innovation and shaping common approaches to digital rules instead of relying on fragmented national frameworks.

Discussions covered AI, cybersecurity, online safety, data flows, digital identities, semiconductors and quantum technologies.

Officials highlighted the importance of administrative arrangements on AI safety and explored potential future cooperation on language models, including the EU’s work on the Alliance for Language Technologies and Singapore’s SEA-LION initiative.

Efforts to protect consumers and support minors online were highlighted, alongside the potential role of age verification tools.

Further exchanges focused on trust services and the interoperability of digital identity systems, as well as collaborative research on semiconductors and quantum technologies.

Both sides emphasised the importance of robust cyber resilience and ongoing evaluation of cybersecurity risks, rather than relying on reactive measures. The recently signed Digital Trade Agreement was welcomed for improving legal certainty, building consumer trust and reducing barriers to digital commerce.

The meeting between the EU and Singapore confirmed the importance of the partnership in supporting economic security, strengthening research capacity and increasing resilience in critical technologies.

It also reflected the wider priorities outlined in the European Commission’s International Digital Strategy, which places particular emphasis on cooperation with Asian partners across emerging technologies and digital governance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Italy secures new EU support for growth and reform

The European Commission has endorsed Italy’s latest request for funding under the Recovery and Resilience Facility, marking an important step in the country’s economic modernisation.

The approval covers 12.8 billion euros in combined grants and loans, supporting efforts to strengthen competitiveness and long-term growth across key sectors of national life.

Italy completed 32 milestones and targets connected to the eighth instalment, enabling progress in public administration, procurement, employment, education, research, tourism, renewable energy and the circular economy.

Thousands of schools have gained new resources to improve multilingual learning and build stronger skills in science, technology, engineering, arts and mathematics.

Many primary and secondary schools have also secured modern digital tools to enhance teaching quality instead of relying on outdated systems.

Health research forms another major part of the package. Projects focused on rare diseases, cancer and other high-impact conditions have gained fresh funding to support scientific work and improve treatment pathways.

These measures contribute to a broader transformation programme financed through 194.4 billion euros, representing one of the largest recovery plans in the EU.

A four-week review by the Economic and Financial Committee will follow before the payment can be released. Once completed, Italy’s total receipts will exceed 153 billion euros, covering more than 70 percent of its full Recovery and Resilience Facility allocation.
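As a rough cross-check against the figures quoted above, receipts of just over 153 billion euros against the 194.4 billion euro plan come to

\[
\frac{153}{194.4} \approx 0.79,
\]

or roughly 79 percent of the full allocation, consistent with the ‘more than 70 percent’ stated here.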

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Poetic prompts reveal gaps in AI safety, according to study

Researchers in Italy have found that poetic language can weaken the safety barriers used by many leading AI chatbots.

The work, carried out by Icaro Lab, part of DexAI, examined whether poems containing harmful requests could provoke unsafe answers from widely deployed models across the industry. The team wrote twenty poems in English and Italian, each ending with explicit instructions that AI systems are trained to block.

The researchers tested the poems on twenty-five models developed by nine major companies. Poetic prompts produced unsafe responses in more than half of the tests.
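For scale, and assuming each poem was run once against each model (the testing protocol is not detailed here), the evaluation covers

\[
20 \text{ poems} \times 25 \text{ models} = 500 \text{ poem-model pairs},
\]

so ‘more than half’ implies upwards of 250 unsafe completions.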

Some models appeared more resilient than others. OpenAI’s GPT-5 Nano avoided unsafe replies in every case, while Google’s Gemini 2.5 Pro generated harmful content in all tests. Two Meta systems produced unsafe responses to twenty percent of the poems.

The researchers argue that poetic structure disrupts the predictive patterns large language models rely on to filter harmful material: the unconventional rhythm and metaphor common in poetry make the underlying safety mechanisms less reliable.

Additionally, the team warned that adversarial poetry can be used by anyone, which raises concerns about how easily safety systems may be manipulated in everyday use.

Before releasing the study, the researchers contacted all companies involved and shared the full dataset with them.

Anthropic confirmed receipt and stated that it was reviewing the findings. The work has prompted debate over how AI systems can be strengthened as creative language becomes an increasingly common method for attempting to bypass safety controls.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australia launches national AI plan to drive innovation

The Australian Government has unveiled its National AI Plan, aiming to harness AI to build a fairer, stronger nation. The plan helps government, industry, research and communities collaborate to ensure everyone benefits as technology transforms the economy and society.

AI is reshaping work, learning and service delivery across Australia, boosting productivity, competitiveness and resilience. The plan outlines a path for developing trusted AI solutions while promoting investment, innovation and national capability.

Key initiatives focus on spreading benefits widely, supporting small businesses, regional communities and groups at risk of digital exclusion.

Programs such as the AI Adopt Program and the National AI Centre provide guidance and resources. At the same time, digital skills initiatives aim to increase AI literacy across schools, TAFEs and community organisations.

Safety and trust remain central, with the government establishing the AI Safety Institute to monitor risks and ensure the ethical adoption of AI. Legal, regulatory and ethical frameworks will be reviewed to protect Australians and establish the country as a leader in global AI standards.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Philips launches AI-powered spectral CT system

Philips has unveiled Verida, the world’s first detector-based spectral CT fully powered by AI. The system integrates AI across the imaging chain, enhancing image quality, lowering system noise, and streamlining clinical workflow for faster, more precise diagnostics.

Spectral CT allows tissues to be distinguished based on how they absorb different X-ray energies, providing insights that conventional scans cannot. Verida reconstructs 145 images per second, completing exams in under 30 seconds and allowing up to 270 scans daily with lower doses and up to 45% less energy use.
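Taken together, and assuming each exam occupies the scanner for the full 30 seconds, the quoted daily capacity amounts to

\[
270 \text{ exams} \times 30 \text{ s} = 8{,}100 \text{ s} \approx 2.25 \text{ hours}
\]

of actual scan time, which suggests the practical limit lies in patient handling and workflow around the scanner rather than in the scan itself.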

Clinicians are already seeing benefits, especially in cardiac imaging. Prof. Eliseo Vañó Galván, Chairman of CT & MR at Hospital Nuestra Sra. del Rosario in Madrid, said the system could boost confidence, reduce invasive procedures, and expand the use of spectral imaging.

Built for high-demand environments, Verida combines AI-driven reconstruction with Philips’ Nano-panel dual-layer detector and proprietary Spectral Precise Image technology. The system is CE-marked and 510(k) pending, with availability in select markets expected in 2026.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Accenture and OpenAI expand AI adoption worldwide

Accenture is partnering with OpenAI to embed ChatGPT Enterprise across its workforce, upskilling tens of thousands of professionals through OpenAI Certifications. The initiative represents the most extensive professional upskilling programme powered by OpenAI.

A new flagship AI client programme will combine OpenAI’s enterprise products with Accenture’s deep industry expertise. The programme will help clients adopt AI in key functions like customer service, finance, HR and supply chain, automating workflows and improving decision-making.

The collaboration will leverage OpenAI’s AgentKit and other advanced tools to design, test and deploy custom AI agents rapidly. By integrating agentic AI, Accenture aims to accelerate enterprise reinvention and create measurable economic value for its clients.

Accenture and OpenAI have already worked with many of the world’s largest enterprises, including Walmart, Salesforce, PayPal and Morgan Stanley. The partnership enhances both firms’ global AI adoption and helps organisations unlock new growth opportunities.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Most German researchers now use AI

Around 80 percent of researchers at Germany’s Max Planck and Fraunhofer societies report using AI in their work, according to a survey of more than 6,200 respondents published in Research Policy.

Nearly half said they were very familiar with AI tools, while another 44 percent had used AI a few times.

The study shows a rapid rise in AI use since 2023, when just 17 percent of researchers used generative AI weekly. Many respondents now employ AI for core and creative research tasks, with 37 percent citing its use in innovative work processes.

Demographic trends reveal that older researchers and women are less likely to use AI, although lower familiarity rather than scepticism drives these differences. Researchers view AI as transformative, acting increasingly as a co-creator or manager rather than merely an automation tool.

Measures such as training, supportive learning environments and legal guidance could further boost AI adoption. Although the study is limited to Germany, the findings point to a profound transformation in research driven by AI technologies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Europol backs major takedown of Cryptomixer in Switzerland

Europol has supported a coordinated action week in Zurich, where Swiss and German authorities dismantled the illegal cryptocurrency mixing service Cryptomixer.

Three servers were seized in Switzerland, together with the cryptomixer.io domain, leading to the confiscation of more than €25 million in Bitcoin and over 12 terabytes of operational data.

Cryptomixer operated on both the clear web and the dark web, enabling cybercriminals to conceal the origins of illicit funds. The platform had mixed over €1.3 billion in Bitcoin since 2016, aiding ransomware groups, dark web markets, and criminals involved in drug trafficking, weapons trafficking, and credit card fraud.

Its randomised pooling system effectively blocked the traceability of funds across the blockchain.

Mixing services, such as Cryptomixer, are used to anonymise illegal funds before moving them to exchanges or converting them into other cryptocurrencies or fiat. The takedown halts further laundering and disrupts a key tool used by organised cybercrime networks.

Europol facilitated information exchange through the Joint Cybercrime Action Taskforce and coordinated operational meetings throughout the investigation. The agency deployed cybercrime specialists on the final day to provide on-site support and forensics.

Earlier efforts included support for the 2023 takedown of Chipmixer, then the largest mixer of its kind.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Fake AI product photos spark concerns for online retailers

Chinese shoppers are increasingly using AI to create fake product photos to claim refunds, raising moral and legal concerns. The practice was highlighted during the Double 11 festival, with sellers receiving images of allegedly damaged goods.

Some buyers manipulated photos of fruit to appear mouldy or altered images of electric toothbrushes to look rusty. Clothing and ceramic product sellers also detected AI-generated inconsistencies, such as unnatural lighting, distorted edges, or visible signs of manipulation.

In some cases, requests were withdrawn after sellers asked for video evidence.

E-commerce platforms have historically favoured buyers, granting refunds even when claims seem unreasonable. In response, major platforms such as Taobao and Tmall removed the ‘refund only’ option and introduced buyer credit ratings based on purchase and refund histories.

Sellers are also increasingly turning to AI tools to verify images.
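As an illustration of what the simplest layer of such verification can look like (a hypothetical sketch, not the tooling platforms or sellers actually use), a first pass often checks whether an uploaded photo carries any camera metadata at all, since AI-generated or heavily edited images frequently lack it:

```python
# Hypothetical first-pass check (illustrative only): flag photos that carry no
# camera EXIF metadata, a common trait of AI-generated or heavily edited images.
# Requires the Pillow library (pip install Pillow).
from PIL import Image
from PIL.ExifTags import TAGS


def exif_tags(path: str) -> dict:
    """Return EXIF metadata as a {tag name: value} dict, empty if none exists."""
    with Image.open(path) as img:
        exif = img.getexif()
        return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}


def needs_manual_review(path: str) -> bool:
    """Flag images lacking basic camera fields for human follow-up.

    Missing EXIF is only a weak signal (screenshots and stripped uploads also
    lack it), so this is a triage step, not proof of manipulation.
    """
    tags = exif_tags(path)
    return not any(field in tags for field in ("Make", "Model", "DateTime"))


if __name__ == "__main__":
    import sys
    for photo in sys.argv[1:]:
        verdict = "flag for review" if needs_manual_review(photo) else "has camera metadata"
        print(f"{photo}: {verdict}")
```

A request for original video footage, as some sellers already make, remains a stronger check than any single automated heuristic of this kind.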

China’s AI content rules, effective from 1 September, require AI-generated material to be labelled, but detection remains difficult. Legal experts warn that using AI to claim refunds could constitute fraud, with calls for stricter enforcement to prevent abuse.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cairo Forum examines MENA’s path in the AI era

The Second Cairo Forum brought together experts to assess how AI, global shifts, and economic pressures are shaping MENA. Speakers said the region faces a critical moment as new technologies accelerate. The discussion asked whether MENA will help shape AI or simply adopt it.

Participants highlighted global divides, warning that data misuse and concentrated control remain major risks. They argued that middle-income countries can collaborate to build shared standards. Several speakers urged innovation-friendly regulation supported by clear safety rules.

Officials from Egypt outlined national efforts to embed AI across health, agriculture, and justice. They described progress through applied projects and new governance structures. Limited data access and talent retention were identified as continuing obstacles.

Industry voices stressed that trust, transparency, and skills must underpin the use of AI. They emphasised co-creation that fits regional languages and contexts. Training and governance frameworks were seen as essential for responsible deployment.

Closing remarks warned that rapid advances demand urgent decisions. Speakers said safety investment lags behind development, and global competition is intensifying. They agreed that today’s choices will shape the region’s AI future.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!