Matthew McConaughey moves decisively to protect AI likeness rights

Oscar-winning actor Matthew McConaughey has trademarked his image and voice to protect them from unauthorised use by AI platforms. His lawyers say the move is intended to safeguard consent and attribution in an evolving digital environment.

Several clips, including his well-known catchphrase from Dazed and Confused, have been registered with the United States Patent and Trademark Office. Legal experts say it is the first time an actor has used trademark law to address potential AI misuse of their likeness.

McConaughey’s legal team said there is no evidence of his image being manipulated by AI so far. The trademarks are intended to act as a preventative measure against unauthorised copying or commercial use.

The actor said he wants to ensure any future use of his voice or appearance is approved. Lawyers also said the approach could help capture value created through licensed AI applications.

Concerns over deepfakes and synthetic media are growing across the entertainment industry. Other celebrities have faced unauthorised AI-generated content, prompting calls for stronger legal protections.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Saying ‘please’ to ChatGPT doesn’t change its energy footprint

Claims that removing polite words from ChatGPT prompts could reduce environmental impact are misleading, experts say. Extra words add minimal processing demand compared with overall system energy use.

AI consumes power mainly because every query triggers fresh computation. Unlike stored digital content, which can simply be served again, each AI response requires a full processing cycle inside large-scale data centres.
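To see why a few polite words barely register, a rough back-of-envelope calculation helps. Every figure below is an illustrative assumption (per-query energy and token counts are not published by providers), and the linear scaling of energy with tokens is a deliberate simplification:

```python
# Back-of-envelope: marginal energy cost of polite words in a chat prompt.
# All figures are illustrative assumptions, not measured values.

WH_PER_QUERY = 0.3        # assumed average energy per chat query, in watt-hours
TOKENS_PER_QUERY = 1000   # assumed tokens processed per typical query (prompt + reply)
POLITE_TOKENS = 5         # assumed extra tokens from "please" / "thank you"

# Simplification: treat energy as scaling linearly with tokens processed.
polite_share = POLITE_TOKENS / TOKENS_PER_QUERY
polite_wh = WH_PER_QUERY * polite_share

print(f"Polite words account for ~{polite_share:.1%} of a query's tokens")
print(f"That is roughly {polite_wh * 1000:.2f} mWh per query under these assumptions")
```

Under these assumptions the politeness overhead is a fraction of a percent of one query's energy, which is itself small next to the fixed costs of running the data centre, cooling and networking that the following paragraph describes.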

Those facilities rely on constant electricity, cooling and water supplies. Rising AI use is therefore increasing pressure on energy systems and local infrastructure worldwide.

Experts argue the real issue lies in how AI infrastructure is planned and regulated. Focusing on prompt wording distracts from managing AI’s long-term environmental footprint.

AI hoax targets Kate Garraway and family

Presenter Kate Garraway has condemned a cruel AI-generated hoax that falsely showed her with a new boyfriend. The images appeared online shortly after the death of her husband, Derek Draper.

Fake images circulated mainly on Facebook through impersonation accounts using her name and likeness. Members of the public and even friends mistakenly believed the relationship was real.

The situation escalated when fabricated news sites began publishing false stories involving her teenage son Billy. Garraway described the experience as deeply hurtful during an already raw period.

Her comments followed renewed scrutiny of AI image tools and platform responsibility. Recent restrictions aim to limit harmful and misleading content generated using artificial intelligence.

Cloudflare acquires Human Native to build a fair AI content licensing model

San Francisco-based company Cloudflare has acquired Human Native, an AI data marketplace designed to connect content creators with AI developers seeking high-quality training and inference material.

The move reflects growing pressure to establish clearer economic rules for how AI systems use online content.

The acquisition is intended to help creators and publishers decide whether to block AI access entirely, optimise material for machine use, or license content for payment instead of allowing uncontrolled scraping.

Cloudflare says the tools developed through Human Native will support transparent pricing and fair compensation across the AI supply chain.

Human Native, founded in 2024 and backed by UK-based investors, focuses on structuring original content so it can be discovered, accessed and purchased by AI developers through standardised channels.

The team includes researchers and engineers with experience across AI research, design platforms and financial media.

Cloudflare argues that access to reliable and ethically sourced data will shape long-term competition in AI. By integrating Human Native into its wider platform, the company aims to support a more sustainable internet economy that balances innovation with creator rights.

Ofcom probes AI companion chatbot over age checks

Ofcom has opened an investigation into Novi Ltd over age checks on its AI companion chatbot. The probe focuses on duties under the Online Safety Act.

Regulators will assess whether children can access pornographic content without effective age assurance. Sanctions could include substantial fines or business disruption measures under the UK’s Online Safety Act.

In a separate case, Ofcom confirmed enforcement pressure led Snapchat to overhaul its illegal content risk assessment. Revised findings now require stronger protections for UK users.

Ofcom said accurate risk assessments underpin online safety regulation. Platforms must match safeguards to real-world risks, particularly where AI and children are concerned.

Brazil excluded from WhatsApp rival AI chatbot ban

WhatsApp has excluded Brazil from its new restriction on third-party general-purpose chatbots, allowing AI providers to continue operating on the platform despite a broader policy shift affecting other markets.

The decision follows action by the competition authority of Brazil, which ordered Meta to suspend elements of the policy while assessing whether the rules unfairly disadvantage rival chatbot providers in favour of Meta AI.

Developers have been informed that services linked to Brazilian phone numbers do not need to stop responding to users or issue service warnings.

Elsewhere, WhatsApp has introduced a 90-day grace period starting in mid-January, after which chatbot developers must halt responses and notify users that their services will no longer function on the app.

The policy applies to tools such as ChatGPT and Grok, while customer service bots used by businesses remain unaffected.

Italy has already secured a similar exemption after regulatory scrutiny, while the EU has opened an antitrust investigation into the new rules.

Meta continues to argue that general-purpose AI chatbots place technical strain on systems designed for business messaging instead of acting as an open distribution platform for AI services.

How Switzerland can shape AI in 2026

Switzerland is heading into 2026 facing an AI transition marked by uncertainty, and it may not win a raw ‘compute race’ dominated by the biggest hardware buyers. In his blog ‘10 Swiss values and practices for AI & digitalisation in 2026,’ Jovan Kurbalija argues that Switzerland’s best response is to build resilience around an ‘AI Trinity’ of Zurich’s entrepreneurship, Geneva’s governance, and communal subsidiarity, using long-standing Swiss practices as a practical compass rather than a slogan.

A central idea is subsidiarity. When top-down approaches hit limits, Switzerland can push ‘bottom-up AI’ grounded in local knowledge and real community needs. Kurbalija points to practical steps such as turning libraries, post offices, and community centres into AI knowledge hubs, creating apprenticeship-style AI programmes, and small grants that help communities develop local AI tools. He also cites a proposal for a ‘Geneva stack’ of sovereign digital tools adopted across public institutions, alongside the notion of a decentralised ‘cyber militia’ capacity for defence.

The blog also leans heavily on entrepreneurship and innovation, especially Switzerland’s SME culture and Zurich’s tech ecosystem. The message for 2026 is to strengthen partnerships between Swiss startups and major global tech firms present in the region, while also connecting more actively with fast-growing digital economy actors from places like India and Singapore.

Instead of chasing moonshots alone, Kurbalija says Switzerland can double down on ‘precision AI’ in areas such as medtech, fintech, and cleantech, and expand its move toward open-source AI tools across the full lifecycle, from models to localised agents.

Another theme is trust and quality, and the challenge of translating Switzerland’s high-trust reputation into the AI era. Beyond cybersecurity, the question is whether Switzerland can help define ‘trustworthy AI,’ potentially even as an international verifier certifying systems.

At the same time, Kurbalija frames quality as a Swiss competitive edge in a world frustrated with low-grade ‘AI slop,’ arguing that better outcomes often depend less on new algorithms and more on well-curated knowledge and data.

He also flags neutrality and sovereignty as issues that will move from abstract debates to urgent policy questions, such as what neutrality means when cyber weapons and AI systems are involved, and how much control a country can realistically keep over data and infrastructure in an interdependent world. He notes that digital sovereignty is a key priority in Switzerland’s 2026 digital strategy, with a likely focus on mapping where critical digital assets are stored and on protecting sensitive domains, such as health, elections, and security, while running local systems when feasible.

Finally, the blog stresses solidarity and resilience as the social and infrastructural foundations of the transition. As AI-driven centralisation risks widening divides, Kurbalija calls for reskilling, support for regions and industries in transition, and digital tools that strengthen social safety nets rather than weaken them.

His bottom line is that Switzerland can’t, and shouldn’t, try to outspend others on hardware. Still, it can choose whether to ‘import the future as a dependency’ or build it as a durable capability, carefully and inclusively, on unmistakably Swiss strengths.

Canada turns to AI to parse feedback on federal AI strategy consultation

Canada’s Innovation, Science and Economic Development (ISED) department saw an overwhelming volume of comments on its national AI strategy consultation, prompting officials to use AI tools to analyse and organise responses from citizens, organisations and stakeholders.

The consultation was part of a broader effort to shape Canada’s approach to AI governance, regulation and adoption, with the government seeking input on how to balance innovation, competitiveness and responsible AI development.

Analysts and advocates have highlighted Canadians’ demand for strong oversight, transparency, and protections covering privacy, data protection, misinformation and the ethical use of AI.

Public interest groups have urged that rights, privacy and sustainability be central pillars of the AI strategy rather than secondary considerations, and recommended risk-based, people-centred regulations similar to frameworks adopted in other jurisdictions.

The use of AI to process feedback illustrates both the scale of engagement and the government’s willingness to employ the very technology it seeks to govern in drafting its policy.

EU lawmakers push limits on AI nudity apps

More than 50 EU lawmakers have called on the European Commission to clarify whether AI-powered nudification apps are prohibited under existing EU legislation, citing concerns about online harm and legal uncertainty.

The request follows public scrutiny of Grok, the chatbot owned by xAI, which was found to generate manipulated intimate images involving women and minors.

Lawmakers argue that such systems enable gender-based online violence and the production of child sexual abuse material rather than serving any legitimate creative use.

In their letter, lawmakers questioned whether current provisions under the EU AI Act sufficiently address nudification tools or whether additional prohibitions are required. They also warned that enforcement focused only on very large online platforms risks leaving similar applications operating elsewhere.

While EU authorities have taken steps under the Digital Services Act to assess platform responsibilities, lawmakers stressed the need for broader regulatory clarity and consistent application across the digital market.

Further political debate on the issue is expected in the coming days.

New TranslateGemma models support 55 languages efficiently

A new suite of open translation models, TranslateGemma, has been launched, bringing advanced multilingual capabilities to users worldwide. Built on the Gemma 3 architecture, the models support 55 languages and come in 4B, 12B, and 27B parameter sizes.

The release aims to make high-quality translation accessible across devices without compromising efficiency.

TranslateGemma delivers impressive performance gains, with the 12B model surpassing the 27B Gemma 3 baseline on WMT24++ benchmarks. The models achieve higher accuracy while requiring fewer parameters, enabling faster translations with lower latency.

The 4B model also performs on par with larger models, making it ideal for mobile deployment.
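Why a 4B model suits on-device use becomes clearer with a rough estimate of weight-memory footprints. The bytes-per-parameter figures below are illustrative assumptions (common fp16 and 4-bit quantisation storage costs), not published TranslateGemma specifications, and they ignore activation memory and runtime overhead:

```python
# Rough memory-footprint estimates for the three TranslateGemma sizes.
# Bytes-per-parameter values are illustrative assumptions; real deployments
# vary with quantisation scheme, activation memory and runtime overhead.

SIZES = {"4B": 4e9, "12B": 12e9, "27B": 27e9}   # approximate parameter counts
BYTES_PER_PARAM = {"fp16": 2.0, "int4": 0.5}    # assumed storage per weight

# Weight storage in GiB for every (size, format) combination
footprint_gib = {
    (name, fmt): params * bpp / 2**30
    for name, params in SIZES.items()
    for fmt, bpp in BYTES_PER_PARAM.items()
}

for (name, fmt), gib in sorted(footprint_gib.items()):
    print(f"{name} @ {fmt}: ~{gib:.1f} GiB of weights")
```

Under these assumptions, a 4-bit-quantised 4B model needs under 2 GiB for its weights, within reach of recent phones, while the 27B model at fp16 needs roughly 50 GiB and is realistic only for cloud or high-end workstation deployment.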

The development combines supervised fine-tuning on diverse parallel datasets with reinforcement learning guided by advanced metrics. TranslateGemma performs well in high- and low-resource languages and supports accurate text translation within images.

Designed for flexible deployment, the models cater to mobile devices, consumer laptops, and cloud environments. Researchers and developers can use TranslateGemma to build customised translation solutions and improve coverage for low-resource languages.
