Smarter interconnects become essential for AI processors

AI workloads are placing unprecedented strain on system-on-chip (SoC) interconnects, pushing design complexity beyond the limits of traditional manual engineering approaches.

Semiconductor engineers are increasingly turning to automated network-on-chip (NoC) design, in which algorithms generate interconnect topologies optimised for bandwidth, latency, power and area.

Physically aware automation reduces wirelengths, congestion and timing failures. Industry specialists report dramatically shorter design cycles and more predictable performance outcomes.
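One metric such tools optimise is the average hop count of a topology, a first-order proxy for latency. The sketch below is purely illustrative (the function names and node counts are our own, not from any vendor tool): it compares a 16-node ring against a 4x4 mesh using breadth-first search over adjacency lists, showing why automated topology exploration pays off.

```python
from collections import deque
from itertools import product

def avg_hops(adj):
    """Average shortest-path hop count over all ordered node pairs (BFS)."""
    nodes = list(adj)
    total = pairs = 0
    for src in nodes:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        for dst in nodes:
            if dst != src:
                total += dist[dst]
                pairs += 1
    return total / pairs

def ring(n):
    """Ring topology: each router links to its two neighbours."""
    return {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

def mesh(rows, cols):
    """2D mesh topology: each router links to up/down/left/right neighbours."""
    adj = {}
    for r, c in product(range(rows), range(cols)):
        nbrs = []
        if r > 0: nbrs.append((r - 1, c))
        if r < rows - 1: nbrs.append((r + 1, c))
        if c > 0: nbrs.append((r, c - 1))
        if c < cols - 1: nbrs.append((r, c + 1))
        adj[(r, c)] = nbrs
    return adj

# A 16-node ring averages ~4.27 hops; a 4x4 mesh of the same 16 nodes ~2.67.
print(f"ring(16): {avg_hops(ring(16)):.2f} hops")
print(f"mesh(4x4): {avg_hops(mesh(4, 4)):.2f} hops")
```

Real NoC generators weigh many more factors (wirelength, congestion, power, floorplan constraints), but even this toy comparison shows how topology choice alone changes latency for the same router budget.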

As AI spreads from data centres to edge devices, interconnect automation is becoming essential. The shift enables smaller teams to deliver powerful, energy efficient processors.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Sadiq Khan voices strong concerns over AI job impact

London Mayor Sir Sadiq Khan has warned that AI could become a ‘weapon of mass destruction of jobs’ if its impact is not managed correctly. He said urgent action is needed to prevent large-scale unemployment.

Speaking at Mansion House in the UK capital, Khan said London is particularly exposed due to the concentration of finance, professional services, and creative industries. He described the potential impact on jobs as ‘colossal’.

Khan said AI could improve public services and help tackle challenges such as cancer care and climate change. At the same time, he warned that reckless use could increase inequality and concentrate wealth and power.

Polling by City Hall suggests more than half of London workers expect AI to affect their jobs within a year. Khan said entry-level roles may disappear fastest, limiting opportunities for young people.

The mayor announced a new task force to assess how Londoners can be supported through the transition. His office will also commission free AI training for residents.

AI becomes the starting point for everyday online tasks

Consumers across the US are increasingly starting everyday digital tasks with AI, rather than search engines or individual apps, according to new research tracking changes in online behaviour.

Dedicated AI platforms are becoming the first place where intent is expressed, whether users are planning travel, comparing products, seeking purchase advice or managing budgets.

Research shows more than 60% of US adults used a standalone AI platform last year, with younger generations especially likely to begin personal tasks through conversational tools rather than traditional search.

Businesses face growing pressure to adapt as AI reshapes how decisions begin, encouraging companies to rethink marketing, commerce and customer journeys around dialogue rather than clicks.

Matthew McConaughey moves decisively to protect AI likeness rights

Oscar-winning actor Matthew McConaughey has trademarked his image and voice to protect them from unauthorised use by AI platforms. His lawyers say the move is intended to safeguard consent and attribution in an evolving digital environment.

Several clips, including his well-known catchphrase from Dazed and Confused, have been registered with the United States Patent and Trademark Office. Legal experts say it is the first time an actor has used trademark law to address potential AI misuse of their likeness.

McConaughey’s legal team said there is no evidence of his image being manipulated by AI so far. The trademarks are intended to act as a preventative measure against unauthorised copying or commercial use.

The actor said he wants to ensure any future use of his voice or appearance is approved. Lawyers also said the approach could help capture value created through licensed AI applications.

Concerns over deepfakes and synthetic media are growing across the entertainment industry. Other celebrities have faced unauthorised AI-generated content, prompting calls for stronger legal protections.

Cloudflare acquires Human Native to build a fair AI content licensing model

San Francisco-based company Cloudflare has acquired Human Native, an AI data marketplace designed to connect content creators with AI developers seeking high-quality training and inference material.

The move reflects growing pressure to establish clearer economic rules for how AI systems use online content.

The acquisition is intended to help creators and publishers decide whether to block AI access entirely, optimise material for machine use, or license content for payment instead of allowing uncontrolled scraping.

Cloudflare says the tools developed through Human Native will support transparent pricing and fair compensation across the AI supply chain.

Human Native, founded in 2024 and backed by UK-based investors, focuses on structuring original content so it can be discovered, accessed and purchased by AI developers through standardised channels.

The team includes researchers and engineers with experience across AI research, design platforms and financial media.

Cloudflare argues that access to reliable and ethically sourced data will shape long-term competition in AI. By integrating Human Native into its wider platform, the company aims to support a more sustainable internet economy that balances innovation with creator rights.

AI users spend 40% of saved time fixing errors

A recent study from Workday reveals that 40% of the time saved by AI in the workplace is spent correcting errors, highlighting a growing productivity paradox. Frequent AI users are bearing the brunt, often double- or triple-checking outputs to ensure accuracy.

Despite widespread adoption (87% of employees report using AI at least a few times per week, and 85% save one to seven hours weekly), much of that time is redirected to fixing low-quality results rather than achieving net gains in productivity.

The findings suggest that AI can increase workloads rather than streamline operations if not implemented carefully.
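The arithmetic behind the paradox is simple. A toy calculation under the study's stated figures (we assume the 40% correction share applies to gross hours saved; the function name is ours, for illustration only):

```python
def net_hours_saved(gross_hours_saved, correction_share=0.40):
    """Net weekly productivity gain after deducting the share of
    'saved' time spent verifying and correcting AI output."""
    return gross_hours_saved * (1 - correction_share)

# An employee who 'saves' 5 hours a week but spends 40% of that
# double-checking outputs nets only 3 hours of real gain.
print(net_hours_saved(5))
```

On these assumptions, even the upper end of the reported range (seven hours saved) would shrink to roughly 4.2 hours of genuine gain, before counting any downstream cost of errors that slip through.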

Experts argue that AI should enhance human work rather than replace it. Employees need tools that handle complex tasks reliably, allowing teams to focus on creativity, judgment, and strategic decision-making.

Upskilling staff to manage AI effectively is critical to realising sustainable productivity benefits.

The study also highlights the risk of organisations prioritising speed over quality. Many AI tools shift responsibility for trust and accuracy onto employees, creating hidden costs and risks for decision-making.

Ofcom probes AI companion chatbot over age checks

Ofcom has opened an investigation into Novi Ltd over age checks on its AI companion chatbot. The probe focuses on duties under the Online Safety Act.

Regulators will assess whether children can access pornographic content without effective age assurance. Sanctions could include substantial fines or business disruption measures under the UK’s Online Safety Act.

In a separate case, Ofcom confirmed enforcement pressure led Snapchat to overhaul its illegal content risk assessment. Revised findings now require stronger protections for UK users.

Ofcom said accurate risk assessments underpin online safety regulation. Platforms must match safeguards to real-world risks, particularly where AI and children are concerned.

Japan and ASEAN agree to boost AI collaboration

Japan and the Association of Southeast Asian Nations (ASEAN) have agreed to collaborate on developing new AI models and preparing related legislation. The cooperation was formalised in a joint statement at a digital ministers’ meeting in Hanoi on Thursday.

Proposed by Minister Hayashi, the initiative aims to boost regional AI capabilities amid US and Chinese competition. Japan emphasised its ongoing commitment to supporting ASEAN’s technological development.

The partnership follows last October’s Japan-ASEAN summit, where Prime Minister Takaichi called for joint research in semiconductors and AI. The agreement aims to foster closer innovation ties and regional collaboration in strategic technology sectors.

The collaboration will engage public and private stakeholders to promote research, knowledge exchange, and capacity-building across ASEAN. Officials expect the partnership to accelerate AI adoption while upholding regional regulations and ethical standards.

Council of Europe highlights legal frameworks for AI fairness

The Council of Europe recently hosted an online event to examine the challenges posed by algorithmic discrimination and explore ways to strengthen governance frameworks for AI and automated decision-making (ADM) systems.

Two new publications were presented, focusing on legal protections against algorithmic bias and policy guidelines for equality bodies and human rights institutions.

Algorithmic bias has been shown to exacerbate existing social inequalities. In employment, AI systems trained on historical data may unfairly favour male candidates or disadvantage minority groups.

Public authorities also use AI in law enforcement, migration, welfare, justice, education, and healthcare, where profiling, facial recognition, and other automated tools can carry discriminatory risks. Private-sector applications in banking, insurance, and personnel services similarly raise concerns.

Legal frameworks such as the EU AI Act (2024/1689) and the Council of Europe’s Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law aim to mitigate these risks. The publications review how regulations protect against algorithmic discrimination and highlight remaining gaps.

National equality bodies and human rights structures play a key role in monitoring AI/ADM systems, ensuring compliance, and promoting human rights-based deployment.

The webinar highlighted practical guidance and examples for applying EU and Council of Europe rules to public sector AI initiatives, fostering more equitable and accountable systems.

How Switzerland can shape AI in 2026

Switzerland is heading into 2026 facing an AI transition marked by uncertainty, and it may not win a raw ‘compute race’ dominated by the biggest hardware buyers. In his blog ‘10 Swiss values and practices for AI & digitalisation in 2026,’ Jovan Kurbalija argues that Switzerland’s best response is to build resilience around an ‘AI Trinity’ of Zurich’s entrepreneurship, Geneva’s governance, and communal subsidiarity, using long-standing Swiss practices as a practical compass rather than a slogan.

A central idea is subsidiarity. When top-down approaches hit limits, Switzerland can push ‘bottom-up AI’ grounded in local knowledge and real community needs. Kurbalija points to practical steps such as turning libraries, post offices, and community centres into AI knowledge hubs, creating apprenticeship-style AI programmes, and small grants that help communities develop local AI tools. He also cites a proposal for a ‘Geneva stack’ of sovereign digital tools adopted across public institutions, alongside the notion of a decentralised ‘cyber militia’ capacity for defence.

The blog also leans heavily on entrepreneurship and innovation, especially Switzerland’s SME culture and Zurich’s tech ecosystem. The message for 2026 is to strengthen partnerships between Swiss startups and major global tech firms present in the region, while also connecting more actively with fast-growing digital economy actors from places like India and Singapore.

Instead of chasing moonshots alone, Kurbalija says Switzerland can double down on ‘precision AI’ in areas such as medtech, fintech, and cleantech, and expand its move toward open-source AI tools across the full lifecycle, from models to localised agents.

Another theme is trust and quality, and the challenge of translating Switzerland’s high-trust reputation into the AI era. Beyond cybersecurity, the question is whether Switzerland can help define ‘trustworthy AI,’ potentially even as an international verifier certifying systems.

At the same time, Kurbalija frames quality as a Swiss competitive edge in a world frustrated with low-grade ‘AI slop,’ arguing that better outcomes often depend less on new algorithms and more on well-curated knowledge and data.

He also flags neutrality and sovereignty as issues that will move from abstract debates to urgent policy questions, such as what neutrality means when cyber weapons and AI systems are involved, and how much control a country can realistically keep over data and infrastructure in an interdependent world. He notes that digital sovereignty is a key priority in Switzerland’s 2026 digital strategy, with a likely focus on mapping where critical digital assets are stored and on protecting sensitive domains, such as health, elections, and security, while running local systems when feasible.

Finally, the blog stresses solidarity and resilience as the social and infrastructural foundations of the transition. As AI-driven centralisation risks widening divides, Kurbalija calls for reskilling, support for regions and industries in transition, and digital tools that strengthen social safety nets rather than weaken them.

His bottom line is that Switzerland can’t, and shouldn’t, try to outspend others on hardware. Still, it can choose whether to ‘import the future as a dependency’ or build it as a durable capability, carefully and inclusively, on unmistakably Swiss strengths.
