Swiss city deepens crypto adoption as 350 businesses now accept Bitcoin

The Swiss city of Lugano has advanced one of Europe’s most ambitious crypto-adoption programmes, with more than 350 shops and restaurants now accepting Bitcoin for everyday purchases, alongside municipal services such as pre-school childcare.

The city has distributed crypto-payment terminals to local merchants free of charge as part of its Plan B initiative, launched in partnership with Tether to position Lugano as a European Bitcoin hub.

Merchants cite lower transaction fees than credit cards, though adoption remains limited in practice. City officials and advocates envision a future ‘circular economy’, where residents earn and spend Bitcoin locally.

Early real-world tests suggest residents can conduct most daily purchases in Bitcoin, though gaps remain in public transport, fuel and utilities.

Lugano’s strategy comes as other national or city-level cryptocurrency initiatives have struggled. El Salvador’s experiment with making Bitcoin legal tender has seen minimal uptake, while cities such as Ljubljana and Zurich have been more successful in encouraging crypto-friendly ecosystems.

Analysts and academics warn that Lugano faces significant risks, including Bitcoin’s volatility, reputational exposure linked to illicit use, and vulnerabilities tied to custodial digital wallets.

Switzerland’s deposit-guarantee protections do not extend to crypto assets, which raises concerns about consumer protection. The mayor, however, dismisses fears of criminal finance, arguing that cash remains far more attractive for illicit transactions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Australian families receive eSafety support as the social media age limit takes effect

Australia has this week introduced a minimum age requirement of 16 for social media accounts, marking a significant shift in its online safety framework.

The eSafety Commissioner has begun monitoring compliance, offering a protective buffer for young people as they develop digital skills and resilience. Platforms now face stricter oversight, with potential penalties for systemic breaches, and age assurance requirements for both new and current users.

Authorities stress that the new age rule forms part of a broader effort aimed at promoting safer online environments, rather than relying on isolated interventions. Australia’s online safety programmes continue to combine regulation, education and industry engagement.

Families and educators are encouraged to use the resources on the eSafety website, which now features information hubs that explain the changes, how age assurance works, and what young people can expect during the transition.

Regional and rural communities in Australia are receiving targeted support, acknowledging that the change may affect them more sharply due to limited local services and higher reliance on online platforms.

Tailored guidance, conversation prompts, and step-by-step materials have been produced in partnership with national mental health organisations.

Young people are reminded that they retain access to group messaging tools, gaming services and video conferencing apps while they await eligibility for full social media accounts.

eSafety officials underline that the new limit introduces a delay rather than a ban. The aim is to reduce exposure to persuasive design and potential harm while encouraging stronger digital literacy, emotional resilience and critical thinking.

Ongoing webinars and on-demand sessions provide additional support as the enforcement phase progresses.


UK partners with DeepMind to boost AI innovation

The UK Department for Science, Innovation and Technology (DSIT) has entered a strategic partnership with Google DeepMind to advance AI across public services, research, and security.

The non-legally binding memorandum of understanding outlines a shared commitment to responsible AI development, while enhancing national readiness for transformative technologies.

The collaboration will explore AI solutions for public services, including education, government departments, and the Incubator for AI (i.AI). Google DeepMind may provide engineering support and develop AI tools, including a government-focused version of Gemini aligned with the national curriculum.

Researchers will gain priority access to DeepMind’s AI models, including AlphaEvolve, AlphaGenome, and WeatherNext, with joint initiatives supporting automated R&D and lab facilities in the UK. The partnership seeks to accelerate innovation in strategically important areas such as fusion energy.

AI security will be strengthened through the UK AI Security Institute, which will share model insights, address emerging risks, and enhance national cyber preparedness. The MoU is voluntary, spans 36 months, and ensures compliance with data privacy laws, including UK GDPR.


Vietnam passes first AI law with strict safeguards

Vietnam’s National Assembly has passed its first AI Law, advancing the regulation and development of AI nationwide. The legislation was approved with overwhelming support, alongside amendments to the Intellectual Property Law and a revised High Technology Law.

The AI Law will take effect on March 1, 2026.

The law establishes core principles, prohibits certain acts, and outlines a risk-management framework for AI systems. It combines safeguards for high-risk AI with incentives for innovation, including sandbox testing, a National AI Development Fund, and startup vouchers.

AI oversight will be centralised under the Government, led by the Ministry of Science and Technology, with assessments needed only for high-risk systems approved by the Prime Minister. The law allows real-time updates to this list to keep pace with technological advances.

Flexible provisions prevent obsolescence by avoiding fixed technology lists or rigid risk classifications. Lawmakers emphasised the balance between regulation and innovation, aiming to create a safe yet supportive environment for AI growth in Vietnam.


EU advances ambitious gigafactory programme for AI leadership

The Council has agreed on a significant amendment to the EuroHPC Joint Undertaking regulation, aiming to establish AI gigafactories across Europe alongside a new quantum pillar.

The plan advances earlier efforts to build AI factories and redirects unused EU funds toward larger and more ambitious facilities. Up to five gigafactories are expected, supported through public and private partnerships that promise a stronger technological base for European research and industry.

AI gigafactories will combine high-performance computing, energy-efficient data centres and automated systems to give Europe world-class AI capacity. The regulation sets out firm rules for funding and procurement while protecting start-ups and scale-ups.

It also allows gigafactories to be spread across multiple countries, creating a flexible model that can strengthen European resilience, competitiveness and security instead of relying heavily on American or Chinese infrastructure.

The agreement also updates the governance of EuroHPC and introduces safeguards for participation by partners outside the EU. Quantum research and innovation activities will move from Horizon Europe to EuroHPC in order to consolidate work on critical technologies.

The shift aims to widen the impact of supercomputing and quantum infrastructure while supporting the development of essential skills for science and industry.

The next stage involves the European Parliament delivering its opinion on 17 December.

A final Council adoption will follow once legal and linguistic checks have been completed, marking a decisive step towards Europe’s new AI and quantum capability.


AI agents redefine knowledge work through cognitive collaboration

A new study by Perplexity and Harvard researchers sheds light on how people use AI agents at scale.

Millions of anonymised interactions were analysed to understand who relies on agent technology, how intensively it is used and what tasks users delegate. The findings challenge the notion of a ‘digital concierge’ model, revealing a shift toward deeper cognitive collaboration rather than mere outsourcing of tasks.

More than half of all activity involves cognitive work, with strong emphasis on productivity, learning and research. Users depend on agents to scan documents, summarise complex material and prepare early analysis before making final decisions.

Students use AI agents to navigate coursework, while professionals rely on them to process information or filter financial data. The pattern suggests that users adopt agents to elevate their own capability instead of avoiding effort.

Usage also evolves. Early queries often involve low-pressure tasks, yet long-term behaviour moves sharply toward productivity and sustained research. Retention rates are highest among users working on structured workflows or knowledge-intensive tasks.

The trajectory mirrors that of the early personal computer, which gained value through spreadsheets and word processing rather than recreational use.

Six main occupations now drive most agent activity, with strong reliance among digital specialists as well as marketing, management and entrepreneurial roles. Context shapes behaviour, as finance users concentrate on efficiency while students favour research.

Designers and hospitality staff follow patterns linked to their professional needs. The study argues that knowledge work is increasingly shaped by the ability to ask better questions and that hybrid intelligence will define future productivity.

The pace of adaptation across the broader economy remains an open question.


Global network strengthens AI measurement and evaluation

Leaders around the world have committed to strengthening the scientific measurement and evaluation of AI following a recent meeting in San Diego.

Representatives from major economies agreed to intensify collaboration under the newly renamed International Network for Advanced AI Measurement, Evaluation and Science.

The UK has assumed the role of Network Coordinator, guiding efforts to create rigorous, globally recognised methods for assessing advanced AI systems.

The network includes Australia, Canada, the EU, France, Japan, Kenya, the Republic of Korea, Singapore, the UK and the US, promoting shared understanding and consistent evaluation practices.

Since its formation in November 2024, the Network has fostered knowledge exchange to align countries on best practices in AI measurement and evaluation. Boosting public trust in AI remains central to the effort, helping to unlock innovation, new jobs and opportunities for businesses to expand.

The recent San Diego discussions coincided with NeurIPS, allowing government, academic and industry stakeholders to collaborate more deeply.

AI Minister Kanishka Narayan highlighted the importance of trust as a foundation for progress, while Adam Beaumont, Interim Director of the AI Security Institute, stressed the need for global approaches to testing advanced AI.

The Network aims to provide practical and rigorous evaluation tools to ensure the safe development and deployment of AI worldwide.


Online data exposure heightens threats to healthcare workers

Healthcare workers are facing escalating levels of workplace violence, with more than three-quarters reporting verbal or physical assaults, prompting hospitals to reassess how they protect staff from both on-site and external threats.

A new study examining people search sites suggests that online exposure of personal information may worsen these risks. Researchers analysed the digital footprint of hundreds of senior medical professionals, finding widespread availability of sensitive personal data.

The study shows that many doctors appear across multiple data broker platforms, with a significant share listed on five or more sites, making it difficult to track, manage, or remove personal information once it enters the public domain.

Exposure varies by age and geography. Younger doctors tend to have smaller digital footprints, while older professionals are more exposed due to accumulated public records. State-level transparency laws also appear to influence how widely data is shared.

Researchers warn that detailed profiles, often available for a small fee, can enable harassment or stalking at a time when threats against healthcare leaders are rising. The findings renew calls for stronger privacy protections for medical staff.


US War Department unveils AI-powered GenAI.mil for all personnel

The War Department has formally launched GenAI.mil, a bespoke generative AI platform powered initially by Gemini for Government, making frontier AI capabilities available to its approximately three million military, civilian, and contractor staff.

According to the department’s announcement, GenAI.mil supports so-called ‘intelligent agentic workflows’: users can summarise documents, generate risk assessments, draft policy or compliance material, analyse imagery or video, and automate routine tasks, all on a secure, IL5-certified platform designed for Controlled Unclassified Information (CUI).

The rollout, described as part of a broader push to cultivate an ‘AI-first’ workforce, follows a July directive from the administration calling for the United States to achieve ‘unprecedented levels of AI technological superiority.’

Department leaders said the platform marks a significant shift in how the US military operates, embedding AI into daily workflows and positioning AI as a force multiplier.

Access is limited to users with a valid DoW common-access card, and the service is currently restricted to non-classified work. The department also says the first rollout is just the beginning; additional AI models from other providers will be added later.

From a tech-governance and defence-policy perspective, this represents one of the most sweeping deployments of generative AI in a national security organisation to date.

It raises critical questions about security, oversight and the balance between efficiency and risk, especially if future iterations expand into classified or operational planning contexts.


OpenAI launches Agentic AI Foundation with industry partners

The US AI company OpenAI has co-founded the Agentic AI Foundation (AAIF) under the Linux Foundation alongside Anthropic, Block, Google, Microsoft, AWS, Bloomberg, and Cloudflare.

The foundation aims to provide neutral stewardship for open, interoperable agentic AI infrastructure as systems move from experimental prototypes into real-world applications.

The initiative includes the donation of OpenAI’s AGENTS.md, a lightweight Markdown file designed to provide agents with project-specific instructions and context.

Since its release in August 2025, AGENTS.md has been adopted by more than 60,000 open-source projects, ensuring consistent behaviour across diverse repositories and frameworks. Contributions from Anthropic and Block will include the Model Context Protocol and the goose project, respectively.
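To illustrate, an AGENTS.md file is plain Markdown placed at a repository root; the format prescribes no required fields, so the headings and commands below are a hypothetical sketch of what a project might include:

```markdown
# AGENTS.md

## Setup
- Install dependencies with `npm install` (example command for a hypothetical project).

## Testing
- Run `npm test` before committing; all tests must pass.

## Conventions
- Use TypeScript strict mode; avoid default exports.
- Write commit messages in the imperative mood.
```

Agents that support the convention read this file alongside the code, so project instructions stay versioned with the repository rather than living separately in each tool’s settings.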

By establishing AAIF, the co-founders intend to prevent ecosystem fragmentation and foster safe, portable, and interoperable agentic AI systems.

The foundation provides a shared platform for development, governance, and extension of open standards, with oversight by the Linux Foundation to guarantee neutral, long-term stewardship.

OpenAI emphasises that the foundation will support developers, enterprises, and the wider open-source community, inviting contributors to help shape agentic AI standards.

The AAIF reflects a collaborative effort to advance agentic AI transparently and in the public interest while promoting innovation across tools and platforms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!