Lithuania selects Swiss firm Procivis for national eIDAS 2.0 wallet sandbox

Swiss firm Procivis has secured a contract to deliver Lithuania’s end-to-end Digital Identity Wallet sandbox, supporting the country’s preparations under eIDAS 2.0. The project will establish a national testbed for digital ID use cases and interoperability across the European Union.

Selected by Lithuania’s digitalisation agency, Procivis will build a platform for public authorities and relying parties to test secure digital wallet use cases. The sandbox will validate readiness ahead of the EU’s 2027 digital identity wallet deadline.

The updated eIDAS 2.0 technical framework sets out how wallets will store and share trusted digital credentials and electronic identification. Governments and private organisations will be able to integrate services into the wallets, streamlining authentication, onboarding, and cross-border access.

Across Lithuania and the EU, testbeds and large-scale pilots have been central to turning regulatory requirements into interoperable infrastructure. Lithuania’s sandbox will also support activities under the EU’s LSP Aptitude consortium, which is testing cross-sector digital identity solutions.

Procivis said the collaboration aims to accelerate practical validation while ensuring compliance with European standards on security, interoperability, and data protection. The company stated that supporting a timely, budget-aligned implementation of eIDAS 2.0 remains central to its mission.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Reddit tests AI shopping search

Reddit has begun testing an AI-powered shopping search tool with a limited group of users in the US. Search queries for product ideas now generate interactive carousels featuring prices, images and direct links to retailers.

Items appearing in the results are drawn from recommendations shared in posts and comments across the platform. Listings are connected to Reddit’s advertising and shopping partners, bringing community discussions closer to online purchasing.

The expansion into AI-led commerce builds on the company’s earlier launch of Dynamic Product Ads, designed to deliver personalised suggestions. Closer integration of search and shopping signals a broader effort to strengthen digital revenue streams.

Chief executive Steve Huffman recently described AI search as a significant business opportunity beyond product development alone. Weekly search users increased from 60 million to 80 million over the past year, while engagement with the AI-powered Reddit Answers tool rose sharply throughout 2025.

These developments place Reddit alongside other technology platforms investing in AI-driven retail features. Growing user engagement suggests the company sees search as central to its future commercial strategy.

Chinese AI video tool unsettles Hollywood

A new AI video model developed by ByteDance has unsettled Hollywood after generating cinema-quality clips from brief text prompts. Seedance 2.0, launched in 2025, went viral for producing realistic action scenes featuring western cinematic characters such as Spider-Man and Deadpool.

In response, major studios, including Disney and Paramount, issued cease-and-desist letters over alleged copyright infringement. Japan has also begun investigating ByteDance after AI-generated anime videos spread widely online.

Industry experts say Seedance 2.0 stands out for combining text, visuals and audio within a single system. Analysts in Singapore and Melbourne argue that Chinese AI models are now matching US competitors at the technological frontier.

As Seedance 2.0 gains traction, Beijing continues to prioritise AI and robotics in its economic strategy. The rise of tools from China has intensified debate in the US and beyond over copyright, regulation and the future of creative work.

Google’s Lyria 3 advances generative AI music with transparency and copyright safeguards

Google has introduced Lyria 3 inside its Gemini app, marking its expansion into AI-generated music. The model enables users to create 30-second tracks from text prompts, images, or short videos. It also supports Dream Track on YouTube Shorts, strengthening AI integration in creator tools.

The development reflects the growing convergence of multimodal AI systems. Gemini can already generate text, images, and video, and music is now added to this ecosystem. This positions Google within the broader race to embed generative AI across digital content infrastructures.

Lyria 3 lowers technical barriers to music production. Users can generate instrumentals and lyrics without prior composition skills, simply by describing a mood, genre, or memory. This aligns with wider efforts to democratise creative expression through AI tools.

The model also introduces technical improvements over earlier audio systems. It offers greater control over tempo, vocals, and style, while producing more realistic and musically complex outputs. However, tracks are currently limited to 30 seconds, suggesting a phased rollout approach.

Transparency measures are embedded through SynthID watermarking technology. All AI-generated tracks include an imperceptible identifier to signal synthetic origin. Such mechanisms respond to increasing policy discussions on labelling and traceability of AI-generated content.

Google also emphasises safeguards related to intellectual property. The system is designed for original expression rather than direct imitation of specific artists. Prompts referencing known artists are treated as stylistic inspiration, and outputs are filtered against existing works, with reporting mechanisms available for potential rights violations.

AI productivity gap reveals critical enterprise adoption challenges

AI continues to generate expectations of broad economic transformation, particularly in productivity and employment. However, the extent of measurable economy-wide gains remains uncertain, and the overall impact of AI on business performance is still being assessed.

An extensive survey conducted by the National Bureau of Economic Research (NBER) found that while around 70% of firms across the US, UK, Germany, and Australia report using AI, nearly 9 in 10 companies have seen no significant effect on productivity or employment over the past three years. The findings suggest a gap between adoption rates and tangible outcomes.

Current enterprise use of AI remains concentrated in specific functions, including text generation with large language models, visual content creation, and data processing. Although previous studies have identified productivity gains in targeted areas such as customer support and writing tasks, these improvements have not yet translated into broad organisational performance increases.

Despite limited results to date, business leaders expect AI to deliver modest productivity gains in the coming years. The survey highlights a divergence in expectations, with senior executives anticipating slight reductions in employment, while employees foresee small job growth linked to AI adoption.

At the same time, some technology leaders predict more immediate disruption. Microsoft’s AI chief has argued that AI could soon reach human-level performance in many professional tasks, potentially reshaping white-collar work within the next few years.

The survey also indicates limited engagement with AI tools among top executives, with many reporting minimal or no direct use of them. This suggests that while AI investment is widespread, its integration into day-to-day leadership practices remains uneven.

Brand turns AI demon into marketing stunt

Beverage company Liquid Death triggered confusion during the Winter Olympics after airing an AI advert featuring a figure skater who transforms into a red-eyed demon. The commercial appeared on Peacock’s Olympics stream but was not posted online, leaving viewers questioning whether it was real.

The brand later confirmed the advert was intentional and designed to parody fears around AI. According to Liquid Death, the limited run and lack of online acknowledgement were meant to amplify the sense of unease during the Winter Olympics broadcast.

Marketing analysts said that brands are increasingly leaning into AI scepticism to build trust with wary consumers. Campaigns from Equinox and Almond Breeze have similarly contrasted human authenticity with AI-generated content.

Despite the strategy, the Winter Olympics stunt drew criticism on social media, with some users labelling the advert AI slop. The reaction highlights both the risks and rewards for brands experimenting with AI-themed messaging.

Adoption of agentic AI slowed by data readiness and governance gaps

Agentic AI is emerging as a new stage of enterprise automation, enabling systems to reason, plan, and act across workflows. Adoption, however, remains uneven, with far fewer organisations scaling deployments beyond pilots.

Unlike traditional analytics or generative tools, agentic systems make decisions rather than simply producing insights. Without sufficient context, they struggle to align actions with real business conditions, revealing a persistent context gap.

Recent survey data highlights this disconnect. Although executives express confidence in AI ambitions, significant shares cite data readiness, infrastructure, and skills as barriers. Many identify AI as central to strategy, yet only a limited proportion tie deployments to measurable business outcomes.

Effective agentic AI depends on layered data foundations. Public data provides baseline capability, organisational data enables operational competence, and third-party context supports differentiation. Weak governance or integration can undermine autonomy at scale.

Enterprises that align data governance, enrichment, and AI oversight are more likely to scale beyond pilots. Progress depends less on model sophistication than on trusted data foundations that support transparency and measurable outcomes.

MIT study finds AI chatbots underperform for vulnerable users

Research from the MIT Center for Constructive Communication (CCC) finds that leading AI chatbots often provide lower-quality responses to users with lower English proficiency or less education, and to those outside the US.

The researchers tested GPT-4, Claude 3 Opus, and Llama 3, which sometimes refused to answer or responded condescendingly. Using the TruthfulQA and SciQ datasets, they added user biographies to simulate differences in education, language, and country.

Accuracy fell sharply among non-native English speakers and less-educated users, with the most significant drop among those affected by both; users from countries like Iran also received lower-quality responses.

Refusal behaviour was notable. Claude 3 Opus declined 11% of questions for less-educated, non-native English speakers versus 3.6% for control users. Manual review showed 43.7% of refusals contained condescending language.

Models sometimes refused to answer questions on specific topics for certain user profiles, even though they answered the same questions correctly for other users.

The study echoes human sociocognitive biases, in which non-native speakers are often perceived as less competent. Researchers warn AI personalisation could worsen inequities, providing marginalised users with subpar or misleading information when they need it most.

Gemini 3.1 Pro brings advanced logic to developers and consumers

Google has launched Gemini 3.1 Pro, an upgraded AI model for solving complex science, research, and engineering challenges. Following the Gemini 3 Deep Think release, the update adds enhanced core reasoning for consumer, developer, and enterprise applications.

Developers can access 3.1 Pro in preview via the Gemini API, Google AI Studio, Gemini CLI, Antigravity, and Android Studio, while enterprise users can use it through Vertex AI and Gemini Enterprise.

Consumers can now try the upgrade through the Gemini app and NotebookLM, with higher limits for Google AI Pro and Ultra plan users.

Benchmarks show significant improvements in logic and problem-solving. On the ARC-AGI-2 benchmark, 3.1 Pro scored 77.1%, more than doubling the reasoning performance of its predecessor.

The upgrade is intended to make AI reasoning more practical, offering tools to visualise complex topics, synthesise data, and enhance creative projects.

Feedback from Gemini 3 Pro users has driven the rapid development of 3.1 Pro. The preview release allows Google to validate improvements and continue refining advanced agentic workflows before the model becomes widely available.

Microsoft outlines challenges in verifying AI-generated media

In an era of deepfakes and AI-manipulated content, determining what is real online has become increasingly complex. Microsoft’s report Media Integrity and Authentication reviews current verification methods, their limits, and ways to boost trust in digital media.

The study emphasises that no single solution can prevent digital deception. Techniques such as provenance tracking, watermarking, and digital fingerprinting can provide useful context about a media file’s origin, creation tools, and whether it has been altered.

Microsoft has pioneered these technologies, co-founding the Coalition for Content Provenance and Authenticity (C2PA) to standardise media authentication globally.

The report also addresses the risks of sociotechnical attacks, where even subtle edits can manipulate authentication results to mislead the public.

Researchers explored how provenance information can remain durable and reliable across different environments, from high-security systems to offline devices, highlighting the challenge of maintaining consistent verification.

As AI-generated or edited content becomes commonplace, secure media provenance is increasingly important for news outlets, public figures, governments, and businesses.

Reliable provenance helps audiences spot manipulated content, with ongoing research guiding clearer, practical verification displays for the public.
