Trump signs order blocking individual US states from enforcing AI rules

US President Donald Trump has signed an executive order aimed at preventing individual US states from enforcing their own AI regulations, arguing that AI oversight should be handled at the federal level. Speaking at the White House, Trump said a single national framework would avoid fragmented rules, while his AI adviser, David Sacks, added that the administration would push back against what it views as overly burdensome state laws, except for measures focused on child safety.

The move has been welcomed by major technology companies, which have long warned that a patchwork of state-level regulations could slow innovation and weaken the US position in the global AI race, particularly in comparison to China. Industry groups say a unified national approach would provide clarity for companies investing billions of dollars in AI development and help maintain US leadership in the sector.

However, the executive order has sparked strong backlash from several states, most notably California. Governor Gavin Newsom criticised the decision as an attempt to undermine state protections, pointing to California’s own AI law that requires large developers to address potential risks posed by their models.

Other states, including New York and Colorado, have also enacted AI regulations, arguing that state action is necessary in the absence of comprehensive federal safeguards.

Critics warn that blocking state laws could leave consumers exposed if federal rules are weak or slow to emerge, while some legal experts caution that a national framework will only be effective if it offers meaningful protections. Despite these concerns, tech lobby groups have praised the order and expressed readiness to work with the White House and Congress to establish nationwide AI standards.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Reddit challenges Australia’s teen social media ban

The US social media company Reddit has launched legal action in Australia as the country enforces the world’s first mandatory minimum age for social media access.

Reddit argues that banning users under 16 prevents younger Australians from taking part in political debate, instead of empowering them to learn how to navigate public discussion.

Lawyers representing the company argue that the rule undermines the implied freedom of political communication and could restrict future voters from understanding the issues that will shape national elections.

Australia’s ban took effect on December 10 and requires major platforms to block underage users or face penalties that can reach nearly 50 million Australian dollars.

Companies are relying on age inference and age estimation technologies to meet the obligation, although many have warned that the policy raises privacy concerns in addition to limiting online expression.

The government maintains that the law is designed to reduce harm for younger users and has confirmed that the list of prohibited platforms may expand as new safety issues emerge.

Reddit’s filing names the Commonwealth of Australia and Communications Minister Anika Wells. The minister’s office says the government intends to defend the law and will prioritise the protection of young Australians, rather than allowing open access to high-risk platforms.

The platform’s challenge follows another case brought by an internet rights group that claims the legislation represents an unfair restriction on free speech.

A separate list identifies services that remain open for younger users, such as Roblox, Pinterest and YouTube Kids. At the same time, platforms including Instagram, TikTok, Snapchat, Reddit and X are blocked for those under 16.

The case is expected to shape future digital access rights in Australia, as online communities become increasingly central to political education and civic engagement among emerging voters.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

India moves toward mandatory AI royalty regime

India is weighing a sweeping copyright framework that would require AI companies to pay royalties for training on copyrighted works under a mandatory blanket licence branded as the hybrid ‘One Nation, One Licence, One Payment’ model.

A new Copyright Royalties Collective for AI Training, or CRCAT, would collect payments from developers and distribute money to creators. AI firms would have to rely only on lawfully accessed material and file detailed summaries of training datasets, including data types and sources.

The panel is expected to favour flat, revenue-linked percentages on global earnings from commercial AI systems, reviewed roughly every three years and open to legal challenge in court.

Obligations would apply retroactively to AI developers that have already trained profitable models on copyright-protected material, framed by Indian policymakers as a corrective measure for the creative ecosystem.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Swiss city deepens crypto adoption as 350 businesses now accept Bitcoin

The Swiss city of Lugano has advanced one of Europe’s most ambitious crypto-adoption programmes, with more than 350 shops and restaurants now accepting Bitcoin for everyday purchases, alongside municipal services such as pre-school childcare.

The city has distributed crypto-payment terminals free to local merchants, part of its Plan B initiative, launched in partnership with Tether to position Lugano as a European bitcoin hub.

Merchants cite lower transaction fees compared with credit cards, though adoption remains limited in practice. City officials and advocates envision a future ‘circular economy’, where residents earn and spend bitcoin locally.

Early real-world tests suggest residents can conduct most daily purchases in bitcoin, though gaps remain in public transport, fuel and utilities.

Lugano’s strategy comes as other national or city-level cryptocurrency initiatives have struggled. El Salvador’s experiment with making Bitcoin legal tender has seen minimal uptake, while cities such as Ljubljana and Zurich have been more successful in encouraging crypto-friendly ecosystems.

Analysts and academics warn that Lugano faces significant risks, including bitcoin’s volatility, reputational exposure linked to illicit use, and vulnerabilities tied to custodial digital wallets.

Switzerland’s deposit-guarantee protections do not extend to crypto assets, which raises concerns about consumer protection. The mayor, however, dismisses fears of criminal finance, arguing that cash remains far more attractive for illicit transactions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Norges Bank says digital krone not required for now

Norway’s central bank has concluded that a central bank digital currency is not needed for now, ending several years of research and reaffirming that the country’s existing payment system remains secure, efficient, and widely used.

Norges Bank stated that it found no current requirement for a digital krone to maintain confidence in payments. Cash usage in Norway is among the lowest globally, but authorities argue the present system continues to serve consumers, merchants, and banks effectively.

The decision is not final. Governor Ida Wolden Bache said the assessment reflects timing rather than a rejection of CBDCs, noting the bank could introduce one if conditions change or if new risks emerge in the domestic payments landscape.

Norges Bank continues to examine both retail and wholesale models as part of its broader work on digital resilience. It also sees potential in tokenisation, which could deliver efficiency gains and lower settlement risk even if a full CBDC is not introduced.

Experiments with tokenised platforms will continue in collaboration with industry partners. At the same time, the bank is preparing a new report for early next year and monitoring international work on shared digital currency infrastructure, including a possible digital euro.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Australian families receive eSafety support as the social media age limit takes effect

Australia has introduced a minimum age requirement of 16 for social media accounts this week, marking a significant shift in its online safety framework.

The eSafety Commissioner has begun monitoring compliance, offering a protective buffer for young people as they develop digital skills and resilience. Platforms now face stricter oversight, with potential penalties for systemic breaches, and age assurance requirements for both new and current users.

Authorities stress that the new age rule forms part of a broader effort aimed at promoting safer online environments, rather than relying on isolated interventions. Australia’s online safety programmes continue to combine regulation, education and industry engagement.

Families and educators are encouraged to utilise the resources on the eSafety website, which now features information hubs that explain the changes, how age assurance works, and what young people can expect during the transition.

Regional and rural communities in Australia are receiving targeted support, acknowledging that the change may affect them more sharply due to limited local services and higher reliance on online platforms.

Tailored guidance, conversation prompts, and step-by-step materials have been produced in partnership with national mental health organisations.

Young people are reminded that they retain access to group messaging tools, gaming services and video conferencing apps while they await eligibility for full social media accounts.

eSafety officials underline that the new limit introduces a delay rather than a ban. The aim is to reduce exposure to persuasive design and potential harm while encouraging stronger digital literacy, emotional resilience and critical thinking.

Ongoing webinars and on-demand sessions provide additional support as the enforcement phase progresses.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI agents redefine knowledge work through cognitive collaboration

A new study by Perplexity and Harvard researchers sheds light on how people use AI agents at scale.

Millions of anonymised interactions were analysed to understand who relies on agent technology, how intensively it is used and what tasks users delegate. The findings challenge the notion of a digital concierge model, revealing a shift toward deeper cognitive collaboration rather than mere task outsourcing.

More than half of all activity involves cognitive work, with strong emphasis on productivity, learning and research. Users depend on agents to scan documents, summarise complex material and prepare early analysis before making final decisions.

Students use AI agents to navigate coursework, while professionals rely on them to process information or filter financial data. The pattern suggests that users adopt agents to elevate their own capability instead of avoiding effort.

Usage also evolves. Early queries often involve low-pressure tasks, yet long-term behaviour moves sharply toward productivity and sustained research. Retention rates are highest among users working on structured workflows or knowledge-intensive tasks.

The trajectory mirrors that of the early personal computer, which gained value through spreadsheets and text processing rather than recreational use.

Six main occupations now drive most agent activity, with the strongest reliance among digital specialists as well as marketing, management and entrepreneurial roles. Context shapes behaviour, as finance users concentrate on efficiency while students favour research.

Designers and hospitality staff follow patterns linked to their professional needs. The study argues that knowledge work is increasingly shaped by the ability to ask better questions and that hybrid intelligence will define future productivity.

The pace of adaptation across the broader economy remains an open question.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

SwissBorg unveils Mastercard-powered crypto card

SwissBorg has formed a strategic partnership with Mastercard to launch the SwissBorg Card, a crypto debit card designed to facilitate everyday digital-asset spending.

Users can spend crypto at over 150 million Mastercard locations worldwide, making digital assets more practical for everyday use.

The card provides real-time crypto-to-fiat conversion via SwissBorg’s Meta-Exchange, which finds the best rates across centralised and decentralised platforms. Users can select a primary asset with backups, and transactions are settled in local currencies such as CHF, GBP, or EUR.

The programme introduces a cashback system that returns up to 90% of exchange-related fees in BORG, with rewards increasing as users progress through SwissBorg’s loyalty ranks. Additional benefits include boosted yields, airdrops, and priority access to selected investment opportunities.

The SwissBorg app lets users manage cards, reorder assets, freeze or block cards, and track conversions. The virtual version will launch in Q1 2026 across 30 countries, with physical cards and expanded features planned for subsequent releases.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

People trust doctors more than AI

New research shows that most people remain cautious about using ChatGPT for diagnoses but view AI more favourably when it supports cancer detection. The findings come from two nationally representative surveys presented at the Society for Risk Analysis annual meeting.

The study, led by researchers from USC and Baruch College, analysed trust and attitudes towards AI in medicine. Participants generally trusted human clinicians more, with only about one in six saying they trusted AI as much as a medical expert.

Individuals who had used AI tools such as ChatGPT tended to hold more positive attitudes, reporting greater understanding and enthusiasm for AI-assisted healthcare. Familiarity appeared to reduce hesitation and increase confidence in the technology.

When shown an AI system for early cervical cancer detection, respondents reported more excitement and potential than fear. The results suggest that concrete, real-world applications can help build trust in medical AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google revisits smart glasses market with AI-powered models

Google has announced plans to re-enter the smart-glasses market in 2026 with new AI-powered wearables, a decade after discontinuing its ill-fated Google Glass.

The company will introduce two models: one without a screen that provides AI assistance through voice and sensor interaction, and another with an integrated display. The glasses will integrate Google’s Gemini AI system.

The move comes as the sector experiences rapid growth. Meta has sold more than two million pairs of its Ray-Ban-branded AI glasses, helping drive a 250% year-on-year surge in smart-glasses sales in early 2025.

Analysts say Google must avoid repeating the missteps of Google Glass, which suffered from privacy concerns, awkward design, and limited functionality before being withdrawn in 2015.

Google’s renewed effort benefits from advances in AI and more mature consumer expectations, but challenges remain. Privacy, data protection, and real-world usability issues, core concerns during Google Glass’s first iteration, are expected to resurface as AI wearables become more capable and pervasive.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!