Swiss Federal Council approves update to tax information exchange rules

The Swiss Federal Council has approved significant updates to the Ordinance on the International Automatic Exchange of Information in Tax Matters. The new rules are set to take effect across Switzerland on 1 January 2026, assuming no referendum intervenes.

The revisions expand Switzerland’s international exchange of financial account information, updating the Common Reporting Standard (CRS) and introducing the new Crypto-Asset Reporting Framework (CARF).

Crypto service providers in Switzerland will have reporting, due diligence, and registration obligations under the AEOI Ordinance, although these provisions will not apply until at least 2027.

The updated Ordinance also extends CRS rules to Swiss associations and foundations while excluding certain accounts if specific conditions are met. Transitional measures are intended to help affected parties implement the amended CRS and CARF more smoothly.

Deliberations on partner states for Switzerland’s crypto data exchange have been paused by the National Council’s Economic Affairs and Taxation Committee. The CARF will become law in Switzerland in 2026, but full implementation is delayed, keeping crypto-asset rules inactive for the first year.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

What the Cloudflare outage taught us: Tracing the outages that shaped today’s internet

The internet has become part of almost everything we do. It helps us work, stay in touch with friends and family, buy things, plan trips, and handle tasks that would have felt impossible until recently. Most people cannot imagine getting through the day without it.

But there is a hidden cost to all this convenience. Most of the time, online services run smoothly, with countless systems working together in the background. Every now and then, though, a key cog slips out of place.

When that happens, the effects can spread fast, taking down apps, websites, and even entire industries within minutes. These moments remind us how much we rely on digital services, and how quickly everything can unravel when something goes wrong. That raises an uncomfortable question: is digital dependence worth the convenience, or are we building a house of cards that could collapse, pulling us back into reality?

Warning shots of the dot-com era and the infancy of cloud services

In its early years, the internet saw several major malfunctions that disrupted key online services. Incidents like the Morris worm in 1988, which crashed about 10 percent of all internet-connected systems, and the 1996 AOL outage that left six million users offline, revealed how unprepared the early infrastructure was for growing digital demand.

A decade later, the weaknesses were still clear. In 2007, Skype, then with over 270 million users, went down for nearly two days after a surge in logins triggered by a Windows update overwhelmed its network. Since video calls were still in their early days, the impact was not as severe, and most users simply waited it out, postponing chats with friends and family until the issue was fixed.

As the dot-com era faded and the 2010s began, the shift to cloud computing introduced a new kind of fragility. When Amazon’s EC2 and EBS systems in the US-East region went down in 2011, the outage took down services like Reddit, Quora, and IMDb for days, exposing how quickly failures in shared infrastructure can cascade.

A year later, GoDaddy’s DNS failure took millions of websites offline, while large-scale Gmail disruptions affected users around the world, early signs that the cloud’s growing influence came with increasingly high stakes.

By the mid-2010s, it was clear that the internet had evolved from a patchwork of standalone services to a heavily interconnected ecosystem. When cloud or DNS providers stumbled, their failures rippled simultaneously across countless platforms. The move to centralised infrastructure made development faster and more accessible, but it also marked the beginning of an era where a single glitch could shake the entire web.

Centralised infrastructure and the age of cascading failures

The late 2000s and early 2010s saw a rapid rise in internet use, with nearly 2 billion people worldwide online. As access grew, more businesses moved into the digital space, offering e-commerce, social platforms, and new forms of online entertainment to a quickly expanding audience.

With so much activity shifting online, the foundation beneath these services became increasingly important, and increasingly centralised, setting the stage for outages that could ripple far beyond a single website or app.

The next major hit came in 2016, when a massive DDoS attack crippled major websites across the USA and Europe. Platforms like Netflix, Reddit, Twitter, and CNN were suddenly unreachable, not because they were directly targeted, but because Dyn, a major DNS provider, had been overwhelmed.

The attack used the Mirai botnet malware to hijack hundreds of thousands of insecure IoT devices and flood Dyn’s servers with traffic. It was one of the clearest demonstrations yet that knocking out a single infrastructure provider could take down major parts of the internet in one stroke.
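
To make that dependency concrete, here is a minimal, illustrative Python sketch (the hostnames are placeholders, not the sites involved in the Dyn attack): a website can be perfectly healthy yet unreachable by name once its DNS provider stops answering, which is why many operators added a second, independent DNS provider after 2016.

```python
import socket

def resolve(hostname: str) -> list[str]:
    """Return the IPv4 addresses a name resolves to, or an empty list if resolution fails."""
    try:
        infos = socket.getaddrinfo(hostname, 443, family=socket.AF_INET)
        return sorted({info[4][0] for info in infos})
    except socket.gaierror:
        return []  # the site's servers may be fine, but without DNS it is unreachable by name

# Placeholder domains standing in for a primary site and a mirror served by an
# independent DNS provider.
for name in ("example.com", "example.net"):
    addresses = resolve(name)
    status = ", ".join(addresses) if addresses else "unresolvable (DNS failure)"
    print(f"{name}: {status}")
```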

In 2017, another major outage occurred, with Amazon at the centre once again. On 28 February, the company’s Simple Storage Service (S3) went down for about 4 hours, disrupting access across a large part of the US-EAST-1 region. While investigating a slowdown in the billing system, an Amazon engineer mistyped a command, taking more servers offline than intended.

That small error was enough to knock out services like Slack, Quora, Coursera, Expedia and countless other websites that relied on S3 for storage or media delivery. The financial impact was substantial; S&P 500 companies alone were estimated to have lost roughly 150 million dollars during the outage.

Amazon quickly published a clear explanation and apology, but transparency could not undo the economic damage, nor soften yet another sudden reminder that a single mistake in a centralised system could ripple across the entire web.

Outages in the roaring 2020s

The S3 incident made one thing clear. Outages were no longer just about a single platform going dark. As more services leaned on shared infrastructure, even small missteps could take down enormous parts of the internet. And this fragility did not stop at cloud storage.

Over the next few years, attention shifted to another layer of the online ecosystem: content delivery networks and edge providers that most people had never heard of but that nearly every website depended on.

The 2020s opened with one of the most memorable outages to date. On 4 October 2021, Facebook and its sister platforms, Instagram, WhatsApp, and Messenger, vanished from the internet for nearly 7 hours after a faulty BGP configuration effectively removed the company’s services from the global routing table.

Millions of users flocked to other platforms to vent their frustration, overwhelming the servers of Twitter, Telegram, Discord, and Signal and causing performance issues across the board. It was a rare moment when a single company’s outage sent measurable shockwaves across the entire social media ecosystem.

But what happens when outages hit industries far more essential than social media? In 2023, the Federal Aviation Administration was forced to delay more than 10,000 flights, the first nationwide grounding of air traffic since the aftermath of September 11.

A corrupted database file brought the agency’s Notice to Air Missions (NOTAM) system to a standstill, leaving pilots without critical safety updates and forcing the entire aviation network to pause. The incident sent airline stocks dipping and dealt another blow to public confidence, showing just how disruptive a single technical failure can be when it strikes at the heart of critical infrastructure.

Outages that defined 2025

The year 2025 saw an unprecedented wave of outages, with server overloads, software glitches and coding errors disrupting services across the globe. The Microsoft 365 suite outage in January, the Southwest Airlines and FAA synchronisation failure in April, and the Meta messaging blackout in July all stood out for their scale and impact.

But the most disruptive failures were still to come. In October, Amazon Web Services suffered a major outage in its US-East-1 region, knocking out everything from social apps to banking services and reminding the world that a fault in a single cloud region can ripple across thousands of platforms.

Just weeks later, the Cloudflare November outage became the defining digital breakdown of the year. A logic bug inside its bot management system triggered a cascading collapse that took down social networks, AI tools, gaming platforms, transit systems and countless everyday websites in minutes. It was the clearest sign yet that when core infrastructure falters, the impact is immediate, global and largely unavoidable.

And yet, we continue to place more weight on these shared foundations, trusting they will hold because they usually do. Every outage, whether caused by a typo, a corrupted file, or a misconfigured update, exposes how quickly things can fall apart when one key piece gives way.

Going forward, resilience needs to matter as much as innovation. That means reducing single points of failure, improving transparency, and designing systems that can fail without dragging everything down. The more clearly we see the fragility of the digital ecosystem, the better equipped we are to strengthen it.
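
What “failing without dragging everything down” can look like in code is sketched below, purely as an illustration (the endpoints are placeholder URLs, not any real provider’s API): a client tries a primary service, falls back to an independent secondary, and finally degrades to a cached answer instead of propagating the failure.

```python
import urllib.request

# Placeholder endpoints: a primary provider, an independent fallback, and a
# locally cached "last known good" response used as a final degraded answer.
PRIMARY = "https://primary.example.com/status"
FALLBACK = "https://fallback.example.net/status"
CACHED_RESPONSE = "service degraded: showing last known data"

def fetch(url: str, timeout: float = 2.0) -> str:
    """Fetch a URL with a short timeout and return the response body as text."""
    with urllib.request.urlopen(url, timeout=timeout) as response:
        return response.read().decode("utf-8")

def resilient_fetch() -> str:
    """Try the primary, then the fallback, then degrade gracefully."""
    for url in (PRIMARY, FALLBACK):
        try:
            return fetch(url)
        except OSError:  # URLError, timeouts and connection errors all subclass OSError
            continue  # this provider is unreachable; try the next option
    return CACHED_RESPONSE  # never raise: degrade instead of cascading the failure

if __name__ == "__main__":
    print(resilient_fetch())
```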

Outages will keep happening, and no amount of engineering can promise perfect uptime. But acknowledging the cracks is the first step toward reinforcing what we’ve built — and making sure the next slipped cog does not bring the whole machine to a stop.

The smoke and mirrors of digital infrastructure

The internet is far from destined to collapse, but resilience can no longer be an afterthought. Redundancy, decentralisation and smarter oversight need to be part of the discussion, not just for engineers, but for policymakers as well.

Outages do not just interrupt our routines. They reveal the systems we have quietly built our lives around. Each failure shows how deeply intertwined our digital world has become, and how fast everything can stop when a single piece gives way.

Will we learn enough from each one to build a digital ecosystem that can absorb the next shock instead of amplifying it? Only time will tell.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Copilot will be removed from WhatsApp on 15 January 2026

Microsoft will withdraw Copilot from WhatsApp as of 15 January 2026, following the implementation of new platform rules that ban all LLM chatbots.

The service helped millions of users interact with their AI companion inside an everyday messaging environment, yet the updated policy leaves no option for continued support.

Copilot access will continue on the mobile app, the web portal and Windows, offering fuller functionality instead of the limited experience available on WhatsApp.

Users are encouraged to rely on these platforms for ongoing features such as Copilot Voice, Vision and Mico, which expand everyday use across a broader set of tasks.

Chat history cannot be transferred because WhatsApp operated the service without authentication; therefore, users must manually export their conversations before the deadline. Copilot remains free across supported platforms, although some advanced features require a subscription.

Microsoft is working to ensure a smooth transition and stresses that users can expect a more capable experience after leaving WhatsApp, as development resources now focus on its dedicated environments.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Agentic AI transforms enterprise workflows in 2026

Enterprise AI entered a new phase as organisations transitioned from simple, prompt-driven tools to autonomous agents capable of acting within complex workflows.

Leaders now face a reality where agentic systems can accelerate development, improve decision-making, and support employees, yet concerns over unreliable data and inconsistent behaviour still weaken trust.

AI adoption has risen sharply, although many remain cautious about committing fully without stronger safeguards in place.

The next stage will rely on multi-agent models where an orchestrator coordinates specialised agents across departments. Single agents will lose effectiveness if they fail to offer scalable value, as enterprises require communication protocols, unified context, and robust governance.

Agents will increasingly pursue outcomes rather than follow instructions. At the same time, event-driven automation will allow them to detect problems, initiate analysis, and collaborate with other agents without waiting for human prompts. Simulation environments will further accelerate learning and strengthen reliability.
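
As a rough sketch of the orchestrator and event-driven patterns described above (a generic illustration, not any vendor’s framework; the agent names and event types are invented for the example), an orchestrator can route incoming events to specialised agents and escalate anything it does not recognise:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Event:
    kind: str      # e.g. "security_alert" or "invoice_received"
    payload: dict  # event-specific data

# Specialised "agents" are stand-ins for domain-specific models or services.
def security_agent(event: Event) -> str:
    return f"triaged alert from {event.payload.get('source', 'unknown')}"

def finance_agent(event: Event) -> str:
    return f"flagged invoice {event.payload.get('invoice_id', '?')} for review"

class Orchestrator:
    """Routes events to specialised agents; unknown events are escalated to a human."""
    def __init__(self) -> None:
        self._agents: dict[str, Callable[[Event], str]] = {}

    def register(self, kind: str, agent: Callable[[Event], str]) -> None:
        self._agents[kind] = agent

    def handle(self, event: Event) -> str:
        agent = self._agents.get(event.kind)
        if agent is None:
            return f"escalated to human: no agent for '{event.kind}'"
        return agent(event)

orchestrator = Orchestrator()
orchestrator.register("security_alert", security_agent)
orchestrator.register("invoice_received", finance_agent)

# Event-driven: the orchestrator reacts as events arrive, without a human prompt.
for event in (Event("security_alert", {"source": "edge-fw-01"}),
              Event("invoice_received", {"invoice_id": "INV-1042"})):
    print(orchestrator.handle(event))
```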

Trusted AI will become a defining competitive factor. Brands will be judged by the quality, personalisation, and relational intelligence of their agents rather than traditional identity markers.

Effective interfaces, transparent governance, and clear metrics for agent adherence will shape customer loyalty and shareholder confidence.

Cybersecurity will shift toward autonomous, self-healing digital immune systems, while advances in spatially aware AI will accelerate robotics and immersive simulations across various industries.

Broader impacts will reshape workplace culture. AI-native engineers will shorten development cycles, while non-technical employees will create personal applications, rather than relying solely on central teams.

Ambient intelligence may push new hardware into the mainstream, and sustainability debates will increasingly focus on water usage in data-intensive AI systems. Governments are preparing to upskill public workforces, and consumer agents will pressure companies to offer better value.

Long-term success will depend on raising AI literacy and selecting platforms designed for scalable, integrated, and agentic operations.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU pushes for stronger powers in delayed customs reform

EU lawmakers have accused national governments of stalling a major customs overhaul aimed at tackling the rise in low-cost parcels from China. Parliament’s lead negotiator Dirk Gotink argues that only stronger EU-level powers can help authorities regain control of soaring e-commerce volumes.

Talks have slowed over a proposed e-commerce data hub linking national customs services. Parliament wants European prosecutors to gain direct access to the hub, while capitals insist that national authorities must remain the gatekeepers to sensitive information.

Gotink warns that limiting access would undermine efforts to stop non-compliant goods, such as those from China, from entering the single market. Senior MEP Anna Cavazzini echoes the concern, saying EU-level oversight is essential to keep consumers safer and improve coordination across borders.

The Danish Council Presidency aims to conclude negotiations in mid-December but concedes that major disputes remain. Trade groups urge a swift deal, arguing that a modernised customs system must support enforcement against surging online imports.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Oakley Meta glasses launch in India with AI features

Meta is preparing to introduce its Oakley Meta HSTN smart glasses to the Indian market as part of a new effort to bring AI-powered eyewear to a broader audience.

The launch begins on 1 December and places the glasses within a growing category of performance-focused devices aimed at athletes and everyday users who want AI built directly into their gear.

The frame includes an integrated camera for hands-free capture and open-ear speakers that provide audio cues without blocking outside sound.

These glasses are designed to suit outdoor environments, offering IPX4 water resistance and robust battery performance. They can also record high-quality 3K video, while Meta AI supplies information, guidance and real-time support.

Users can expect up to eight hours of active use and a rapid recharge, with a dedicated case providing an additional forty-eight hours of battery life.

Meta has focused on accessibility by enabling full Hindi language support through the Meta AI app, allowing users to interact in their preferred language instead of relying on English.

The company is also testing UPI Lite payments through a simple voice command that connects directly to WhatsApp-linked bank accounts.

A ‘Hey Meta’ prompt enables hands-free assistance for questions, recording, or information retrieval, allowing users to remain focused on their activity.

The new lineup arrives in six frame and lens combinations, all of which are compatible with prescription lenses. Meta is also introducing its Celebrity AI Voice feature in India, with Deepika Padukone’s English AI voice among the first options.

Pre-orders are open on Sunglass Hut, with broader availability planned across major eyewear retailers at a starting price of ₹41,800.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AWS commits $50bn to US government AI

Amazon Web Services plans to invest $50 billion in high-performance AI infrastructure dedicated to US federal agencies. The programme aims to broaden access to AWS tools such as SageMaker AI, Bedrock and model customisation services, alongside support for Anthropic’s Claude.

The expansion will add around 1.3 gigawatts of compute capacity, enabling agencies to run larger models and speed up complex workloads. AWS expects construction of the new data centres to begin in 2026, marking one of its most ambitious government-focused buildouts to date.

Chief executive Matt Garman argues the upgrade will remove long-standing technology barriers within government. The company says enhanced AI capabilities could accelerate work in areas ranging from cybersecurity to medical research while strengthening national leadership in advanced computing.

AWS has spent more than a decade developing secure environments for classified and sensitive government operations. Competitors have also stepped up US public sector offerings, with OpenAI, Anthropic and Google all rolling out heavily discounted AI products for federal use over the past year.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

NVIDIA powers a new wave of specialised AI agents to transform business

Agentic AI has entered a new phase as companies rely on specialised systems instead of broad, one-size-fits-all models.

Open-source foundations, such as NVIDIA’s Nemotron family, now allow organisations to combine internal knowledge with tailored architectures, leading to agents that understand the precise demands of each workflow.

Firms across cybersecurity, payments and semiconductor engineering are beginning to treat specialisation as the route to genuine operational value.

CrowdStrike is utilising Nemotron and NVIDIA NIM microservices to enhance its Agentic Security Platform, which supports teams by handling high-volume tasks such as alert triage and remediation.

Accuracy has risen from 80 to 98.5 percent, reducing manual effort tenfold and helping analysts manage complex threats with greater speed.

PayPal has taken a similar path by building commerce-focused agents that enable conversational shopping and payments, cutting latency nearly in half while maintaining the precision required across its global network of customers and merchants.

Synopsys is deploying agentic AI throughout chip design workflows by pairing open models with NVIDIA’s accelerated infrastructure. Early trials in formal verification show productivity improvements of 72 percent, offering engineers a faster route to identifying design errors.

The company is blending fine-tuned models with tools such as the NeMo Agent Toolkit and Blueprints to embed agentic support at every stage of development.

Across industries, the strategic steps are becoming clear. Organisations begin by evaluating open models, then curate and secure domain-specific data, and finally build agents capable of acting on proprietary information.

Continuous refinement through a data flywheel strengthens long-term performance.

NVIDIA aims to support the shift by promoting Nemotron, NeMo and its broader software ecosystem as the foundation for the next generation of specialised enterprise agents.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT unveils new shopping research experience

ChatGPT has introduced a more comprehensive approach to product discovery with a new shopping research feature, rolled out yesterday and designed to simplify complex purchasing decisions.

Users describe what they need instead of sifting through countless sites, and the system generates personalised buyer guides based on high-quality sources. The feature adapts to each user by asking targeted questions and reflecting previously stored preferences in memory.

The experience has been built with a specialised version of GPT-5 mini trained for shopping tasks through reinforcement learning. It gathers fresh information such as prices, specifications, and availability by reading reliable retail pages directly.

Users can refine the process in real time by marking products as unsuitable or requesting similar alternatives, producing more precise results.

The tool is available on all ChatGPT plans and offers expanded usage during the holiday period. OpenAI emphasises that no chats are shared with retailers and that results are drawn from public data rather than sponsored content.

Some errors may still occur in product details, yet the intention is to develop a more intuitive and personalised way to navigate an increasingly crowded digital marketplace.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Japan boosts Rapidus with major semiconductor funding

Japan will inject more than one trillion yen (approximately €5.5 billion) into chipmaker Rapidus between 2026 and 2027. The plan aims to fortify national economic security by rebuilding domestic semiconductor capacity after decades of reliance on overseas suppliers.

Rapidus intends to begin producing 2-nanometre chips in late 2027 as global demand for faster, AI-ready components surges. The firm expects overall investment to reach seven trillion yen and hopes to list publicly around 2031.

Japanese government support includes large subsidies and direct investment that add to earlier multi-year commitments. Private contributors, including Toyota and Sony, previously backed the venture, which was founded in 2022 to revive Japan’s cutting-edge chip ambitions.

Officials argue that advanced production is vital for technological competitiveness and future resilience. Critics of the investment point to its steep costs and high risks, yet policymakers view Rapidus as crucial to keeping pace with technological advancements.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!