Technological inventions blurring the line between reality and fiction

The rapid progress of AI over the past few years has unsettled people around the world, to the point where it is extremely difficult to say with certainty whether a given piece of content was created by AI or not.

We are confronted with this phenomenon through photos, video and audio recordings that can easily confuse us and force us to question our perception of reality.

Digital twins are being used by scammers in the crypto space to impersonate influencers and execute fraudulent schemes.

And while the public often focuses on deepfakes, at the same time we are witnessing inventions and patents emerging around the world that deserve admiration, but also spark important reflection: are we nearing, or have we already crossed, the ethical red line?

For these and many other reasons, in a world where the visual and functional differences between science fiction and reality have almost disappeared, the latest inventions come as a shock.

We are now at a point where we are facing technologies that force us to redefine what we mean by the word ‘reality’.

Neuralink: Crossing the boundary between brain and machine

Amyotrophic lateral sclerosis (ALS) is a rare neurological disease caused by damage and degeneration of motor neurons—nerve cells in the brain and spinal cord. This damage disrupts the transmission of nerve impulses to muscles via peripheral nerves, leading to a progressive loss of muscle function.

However, the Neuralink chip, developed by Elon Musk’s company, has helped one patient type with their mind and speak using a synthesised version of their own voice. This breakthrough opens the door to a new form of communication where thoughts become direct interactions.

Liquid robot from South Korea

Scenes from sci-fi films are becoming reality, and in this case (thankfully), a liquid robot has a noble purpose—to assist in rescue missions and be applied in medicine.

Currently in the early prototype stage, it has been demonstrated in labs through a collaboration between MIT and Korean research institutes.

ULS exoskeleton as support for elderly care

Healthcare workers and caregivers in China have had their work greatly simplified thanks to the ULS Robotics exoskeleton, weighing only five kilograms but enabling users to lift up to 30 kilograms.

This represents a leap forward in caring for people with limited mobility, while also increasing safety and efficiency. Commercial prototypes have been tested in hospitals and industrial environments.

Agrorobots: Autonomous crop spraying

Another example comes from China, where robots equipped with AI have been performing precise crop spraying for several years. The system analyses pests and targets them without the need for human presence, reducing potential health risks.

The application has become standardised, with expectations for further expansion and improvement in the near future.

The stretchable battery of the future

Researchers in Sweden have developed a flexible battery that can double in length without losing energy, making it ideal for wearable technologies.

Although not yet commercially available, it has been covered in scientific journals. The aim is for it to become a key component in bendable devices, smart clothing and medical implants.

Volonaut Airbike: A sci-fi vehicle takes off

When it comes to innovation, the Volonaut Airbike hits the mark perfectly. Designed to resemble a single-seat speeder bike from Star Wars, it represents a giant leap toward personal air travel.

Functional prototypes exist, but testing remains limited due to high production costs and regulatory hurdles related to traffic laws. Nevertheless, the Polish company behind it remains committed to this idea, and it will be exciting to follow its progress.

NEO robot: The humanoid household assistant

A Norwegian company has been developing a humanoid robot capable of performing household tasks, including gardening chores like collecting and bagging leaves or grass.

These are among the first serious steps toward domestic humanoid assistants. Currently functioning in demo mode, the robot has received backing from OpenAI.

Lenovo Yoga Solar: The laptop that loves sunlight

If you find yourself without a charger but with access to direct sunlight, this laptop will do everything it can to keep you powered. Using solar energy, 20 minutes of charging in sunlight provides around one hour of video playback.

Perfect for environmentally conscious users and digital nomads. Although not yet commercially available, it has been showcased at several major tech expos.

What comes next: The need for smart regulation

As technology races ahead, regulation must catch up. From neurotech to autonomous robots, each innovation raises new questions about privacy, accountability, and ethics.

Governments and tech developers alike must collaborate to ensure that these inventions remain tools for good, not risks to society.

So, what is real and what is generated?

This question will only become harder to answer as time goes on. But on the other hand, if the technological revolution continues to head in a useful and positive direction, perhaps there is little to fear.

The true dilemma in this era of rapid innovation may not be about the tools themselves, but about the fundamental question: Is technology shaping us, or do we still shape it?

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Rewriting the AI playbook: How Meta plans to win through openness

Meta hosted its first-ever LlamaCon, a high-profile developer conference centred around its open-source language models. Timed to coincide with the release of its Q1 earnings, the event showcased Llama 4, Meta’s newest and most powerful open-weight model yet.

The message was clear – Meta wants to lead the next generation of AI on its own terms, and with an open-source edge. Beyond presentations, the conference represented an attempt to reframe Meta’s public image.

Once defined by social media and privacy controversies, Meta is positioning itself as a visionary AI infrastructure company. LlamaCon wasn’t just about a model. It was about a movement Meta wants to lead, with developers, startups, and enterprises as co-builders.

By holding LlamaCon the same week as its earnings call, Meta strategically emphasised that its AI ambitions are not side projects. They are central to the company’s identity, strategy, and investment priorities moving forward. This convergence of messaging signals a bold new chapter in Meta’s evolution.

The rise of Llama: From open-source curiosity to strategic priority

When Meta introduced LLaMA 1 in 2023, the AI community took notice of its open-weight release policy. Unlike OpenAI and Anthropic, Meta allowed researchers and developers to download, fine-tune, and deploy Llama models on their own infrastructure. That decision opened the floodgates to experimentation and grassroots innovation.

Now with Llama 4, the models have matured significantly, featuring better instruction tuning, multilingual capacity, and improved safety guardrails. Meta’s AI researchers have incorporated lessons learned from previous iterations and community feedback, making Llama 4 not just an update but a strategic inflexion point.

Crucially, Meta is no longer releasing Llama as a research novelty. It is now a platform and stable foundation for third-party tools, enterprise solutions, and Meta’s AI products. That is a turning point, where open-source ideology meets enterprise-grade execution.

Zuckerberg’s bet: AI as the engine of Meta’s next chapter

Mark Zuckerberg has rarely shied away from bold, long-term bets—whether it’s the pivot to mobile in the early 2010s or the more recent metaverse gamble. At LlamaCon, he made clear that AI is now the company’s top priority, surpassing even virtual reality in strategic importance.

He framed Meta as a ‘general-purpose AI company’, focused on both the consumer layer (via chatbots and assistants) and the foundational layer (models and infrastructure). The Meta CEO envisions a world where Meta powers both the AI you talk to and the AI your apps are built on—a dual play that rivals Microsoft’s partnership with OpenAI.

This bet comes with risk. Investors are still sceptical about Meta’s ability to turn research breakthroughs into a commercial advantage. But Zuckerberg seems convinced that whoever controls the AI stack—hardware, models, and tooling—will control the next decade of innovation, and Meta intends to be one of those players.

A costly future: Meta’s massive AI infrastructure investment

Meta’s capital expenditure guidance for 2025—$60 to $65 billion—is among the largest in tech history. These funds will be spent primarily on AI training clusters, data centres, and next-gen chips.

That level of spending underscores Meta’s belief that scale is a competitive advantage in the LLM era. Bigger compute means faster training, better fine-tuning, and more responsive inference—especially for billion-parameter models like Llama 4 and beyond.

However, such an investment raises questions about whether Meta can recoup this spending in the short term. Will it build enterprise services, or rely solely on indirect value via engagement and ads? At this point, no monetisation plan is directly tied to Llama—only a vision and the infrastructure to support it.

Economic clouds: Revenue growth vs Wall Street’s expectations

Meta reported an 11% year-over-year increase in revenue in Q1 2025, driven by steady performance across its ad platforms. However, Wall Street reacted negatively, with the company’s stock falling nearly 13% following the earnings report, because investors are worried about the ballooning costs associated with Meta’s AI ambitions.

Despite revenue growth, Meta’s margins are thinning, mainly due to front-loaded investments in infrastructure and R&D. While Meta frames these as essential for long-term dominance in AI, investors are still anchored to short-term profit expectations.

A fundamental tension is at play here – Meta is acting like a venture-stage AI startup with moonshot spending, while being valued as a mature, cash-generating public company. Whether this tension resolves through growth or retrenchment remains to be seen.

Global headwinds: China, tariffs, and the shifting tech supply chain

Beyond internal financial pressures, Meta faces growing external challenges. Trade tensions between the US and China have disrupted the global supply chain for semiconductors, AI chips, and data centre components.

Meta’s international outlook is dimming with tariffs increasing and Chinese advertising revenue falling. That is particularly problematic because Meta’s AI infrastructure relies heavily on global suppliers and fabrication facilities. Any disruption in chip delivery, especially GPUs and custom silicon, could derail its training schedules and deployment timelines.

At the same time, Meta is trying to rebuild its hardware supply chain, including in-house chip design and alternative sourcing from regions like India and Southeast Asia. These moves are defensive but reflect how AI strategy is becoming inseparable from geopolitics.

Llama 4 in context: How it compares to GPT-4 and Gemini

Llama 4 represents a significant leap over earlier Llama generations and is now comparable to GPT-4 across a range of benchmarks. Early feedback suggests strong performance in logic, multilingual reasoning, and code generation.

However, how it handles tool use, memory, and advanced agentic tasks is still unclear. Compared to Gemini 1.5, Google’s flagship model, Llama 4 may still fall short in certain use cases, especially those requiring long context windows and deep integration with other Google services.

But Llama has one powerful advantage – it’s free to use, modify, and self-host. That makes Llama 4 a compelling option for developers and companies seeking control over their AI stack without paying per-token fees or exposing sensitive data to third parties.

Open source vs closed AI: Strategic gamble or masterstroke?

Meta’s open-weight philosophy differentiates it from rivals, whose models are mainly gated, API-bound, and proprietary. By contrast, Meta freely gives away its most valuable assets, such as weights, training details, and documentation.

Openness drives adoption. It creates ecosystems, accelerates tooling, and builds developer goodwill. Meta’s strategy is to win the AI competition not by charging rent, but by giving others the keys to build on its models. In doing so, it hopes to shape the direction of AI development globally.

Still, there are risks. Open weights can be misused, fine-tuned for malicious purposes, or leaked into products Meta doesn’t control. But Meta is betting that being everywhere is more powerful than being gated. And so far, that bet is paying off—at least in influence, if not yet in revenue.

Can Meta’s open strategy deliver long-term returns?

Meta’s LlamaCon wasn’t just a tech event but a philosophical declaration. In an era where AI power is increasingly concentrated and monetised, Meta chooses a different path based on openness, infrastructure, and community adoption.

The company invests tens of billions of dollars without a clear monetisation model. It is placing a massive bet that open models and proprietary infrastructure can become the dominant framework for AI development.

Meanwhile, Meta is facing a major antitrust trial, as the FTC argues that its Instagram and WhatsApp acquisitions were made to eliminate competition rather than foster innovation.

Meta’s move positions it as the Android of the LLM era—ubiquitous, flexible, and impossible to ignore. The road ahead will be shaped by both technical breakthroughs and external forces—regulation, economics, and geopolitics.

Whether Meta’s open-source gamble proves visionary or reckless, one thing is clear – the AI landscape is no longer just about who has the most innovative model. It’s about who builds the broadest ecosystem.

Beyond the imitation game: GPT-4.5, the Turing Test, and what comes next

From GPT-4 to 4.5: What has changed and why it matters

In February 2025, OpenAI released GPT-4.5, the latest iteration in its series of large language models (LLMs), pushing the boundaries of what machines can do with language understanding and generation. Building on the strengths of GPT-4, GPT-4.5 demonstrates improved reasoning capabilities, a more nuanced understanding of context, and smoother, more human-like interactions.

What sets GPT-4.5 apart from its predecessors is its refined alignment techniques, better memory over longer conversations, and increased control over tone, persona, and factual accuracy. Its ability to maintain coherent, emotionally resonant exchanges over extended dialogue marks a turning point in human-AI communication. These improvements are not just technical — they significantly affect the way we work, communicate, and relate to intelligent systems.

The increasing ability of GPT-4.5 to mimic human behaviour has raised a key question: Can it really fool us into thinking it is one of us? That question has recently been answered — and it has everything to do with the Turing Test.

The Turing Test: Origins, purpose, and modern relevance

In 1950, British mathematician and computer scientist Alan Turing posed a provocative question: ‘Can machines think?’ In his seminal paper ‘Computing Machinery and Intelligence,’ he proposed what would later become known as the Turing Test — a practical way of evaluating a machine’s ability to exhibit intelligent behaviour indistinguishable from that of a human.

In its simplest form, if a human evaluator cannot reliably distinguish between a human’s and a machine’s responses during a conversation, the machine is said to have passed the test. For decades, the Turing Test remained more of a philosophical benchmark than a practical one.

Early chatbots like ELIZA in the 1960s created the illusion of intelligence, but their scripted and shallow interactions fell far short of genuine human-like communication. Many researchers have questioned the test’s relevance as AI progressed, arguing that mimicking conversation is not the same as true understanding or consciousness.

Despite these criticisms, the Turing Test has endured — not as a definitive measure of machine intelligence, but rather as a cultural milestone and public barometer of AI progress. Today, the test has regained prominence with the emergence of models like GPT-4.5, which can hold complex, context-aware, emotionally intelligent conversations. What once seemed like a distant hypothetical is now an active, measurable challenge that GPT-4.5 has, by many accounts, overcome.

How GPT-4.5 fooled the judges: Inside the Turing Test study

In early 2025, a groundbreaking study conducted by researchers at the University of California, San Diego, provided the most substantial evidence yet that an AI could pass the Turing Test. In a controlled experiment involving over 500 participants, multiple conversational agents—including GPT-4.5, Meta’s LLaMa-3.1, and the classic chatbot ELIZA—were evaluated in blind text-based conversations. The participants were tasked with identifying whether they were speaking to a human or a machine.

The results were astonishing: GPT-4.5 was judged to be human in 54% to 73% of interactions, depending on the scenario, surpassing the baseline for passing the Turing Test. In some cases, it outperformed actual human participants—who were correctly identified as human only 67% of the time.
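For intuition on why such percentages count as more than chance, a simple one-sample proportion test can be sketched. The per-condition sample size is not reported here, so the n = 100 below is an assumed, purely illustrative figure:

```python
import math

def z_vs_chance(successes: int, n: int, p0: float = 0.5) -> float:
    """One-sample z-statistic for an observed proportion against chance p0."""
    p_hat = successes / n
    se = math.sqrt(p0 * (1 - p0) / n)  # standard error under the null
    return (p_hat - p0) / se

# Hypothetical numbers: if 73 of 100 judges labelled the model human,
# the 50% chance baseline is rejected comfortably (z well above 1.96).
print(round(z_vs_chance(73, 100), 2))
```

Anything above roughly 1.96 standard errors corresponds to p < 0.05, which is why a sustained 73% "judged human" rate is so striking against a 50% guessing baseline.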

That experiment marked the first time a contemporary AI model convincingly passed the Turing Test under rigorous scientific conditions. The study not only demonstrated the model’s technical capabilities—it also raised philosophical and ethical questions.

What does it mean for a machine to be ‘indistinguishable’ from a human? And more importantly, how should society respond to a world where AI can convincingly impersonate us?

Measuring up: GPT-4.5 vs LLaMa-3.1 and ELIZA

While GPT-4.5’s performance in the Turing Test has garnered much attention, its comparison with other models puts things into a clearer perspective. Meta’s LLaMa-3.1, a powerful and widely respected open-source model, also participated in the study.

It was identified as human in approximately 56% of interactions — a strong showing, though it fell just short of the commonly accepted benchmark for a Turing Test pass. The result highlights how subtle differences in conversational nuance and coherence can significantly influence perception.

The study also revisited ELIZA, the pioneering chatbot from the 1960s designed to mimic a psychotherapist. While historically significant, ELIZA’s simplistic, rule-based structure resulted in it being identified as non-human in most cases — around 77%. That stark contrast with modern models demonstrates how far natural language processing has progressed over the past six decades.
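ELIZA’s approach can be sketched in a few lines: ranked pattern-matching rules plus pronoun "reflection". The rules below are illustrative stand-ins, not Weizenbaum’s original DOCTOR script:

```python
import re

# Map first-person words to second-person ones so echoes sound natural.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

# Ordered list of (pattern, response template) rules, checked top to bottom.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Swap pronouns so the echoed fragment addresses the user."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(text: str) -> str:
    for pattern, template in RULES:
        m = pattern.search(text)
        if m:
            return template.format(reflect(m.group(1)))
    return "Please, go on."  # generic fallback when nothing matches

print(respond("I feel anxious about my exams"))
```

Because every response is a template echo of the user’s own words, the illusion collapses quickly under sustained conversation, which is exactly why ELIZA was identified as non-human so reliably.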

The comparative results underscore an important point: success in human-AI interaction today depends not only on fluent language generation but also on the ability to adapt tone, context, and emotional resonance. GPT-4.5’s edge seems to come not from mere fluency but from its ability to emulate the subtle cues of human reasoning and expression — a quality that left many test participants second-guessing whether they were even talking to a machine.

The power of persona: How character shaped perception

One of the most intriguing aspects of the UC San Diego study was how assigning specific personas to AI models significantly influenced participants’ perceptions. When GPT-4.5 was framed as an introverted, geeky 19-year-old college student, it consistently scored higher in being perceived as human than when it had no defined personality.

The seemingly small narrative detail was a powerful psychological cue that shaped how people interpreted its responses. The use of persona added a layer of realism to the conversation.

Slight awkwardness, informal phrasing, or quirky responses were not seen as flaws — they were consistent with the character. Participants were more likely to forgive or overlook certain imperfections if those quirks aligned with the model’s ‘personality’.

That finding reveals how intertwined identity and believability are in human communication, even when the identity is entirely artificial. The strategy also echoes something long known in storytelling and branding: people respond to characters, not just content.

In the context of AI, persona functions as a kind of narrative camouflage — not necessarily to deceive, but to disarm. It helps bridge the uncanny valley by offering users a familiar social framework. And as AI continues to evolve, it is clear that shaping how a model is perceived may be just as important as what the model is actually saying.

Limitations of the Turing Test: Beyond the illusion of intelligence

While passing the Turing Test has long been viewed as a milestone in AI, many experts argue that it is not the definitive measure of machine intelligence. The test focuses on imitation — whether an AI can appear human in conversation — rather than on genuine understanding, reasoning, or consciousness. In that sense, it is more about performance than true cognitive capability.

Critics point out that large language models like GPT-4.5 do not ‘understand’ language in the human sense – they generate text by predicting the most statistically probable next word based on patterns in massive datasets. That allows them to generate impressively coherent responses, but it does not equate to comprehension, self-awareness, or independent thought.
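The "predict the next word" idea can be illustrated with a toy bigram model. Real LLMs use deep neural networks over subword tokens, but the core objective of picking a statistically likely continuation is the same in spirit; the tiny corpus here is invented purely for illustration:

```python
from collections import Counter, defaultdict

# Invented toy corpus, purely for illustration.
corpus = ("the cat sat on the mat and the cat ran to "
          "the mat and the cat slept").split()

# Count how often each word follows each other word (bigram counts).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently observed after `word`."""
    return bigrams[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

The model produces plausible continuations without any notion of meaning, which is the critics’ point in miniature: statistical pattern-matching can look like understanding without being it.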

No matter how convincing, the illusion of intelligence is still an illusion — and mistaking it for something more can lead to misplaced trust or overreliance. Despite its symbolic power, the Turing Test was never meant to be the final word on AI.

As AI systems grow increasingly sophisticated, new benchmarks are needed — ones that assess not just linguistic mimicry but reasoning, ethical decision-making, and robustness in real-world environments. Passing the Turing Test may grab headlines, but the real test of intelligence lies far beyond the ability to talk like us.

Wider implications: Rethinking the role of AI in society

GPT-4.5’s success in the Turing Test does not just mark a technical achievement — it forces us to confront deeper societal questions. If AI can convincingly pass as a human in open conversation, what does that mean for trust, communication, and authenticity in our digital lives?

From customer service bots to AI-generated news anchors, the line between human and machine is blurring — and the implications are far from purely academic. These developments are challenging existing norms in areas such as journalism, education, healthcare, and even online dating.

How do we ensure transparency when AI is involved? Should AI be required to disclose its identity in every interaction? And how do we guard against malicious uses — such as deepfake conversations or synthetic personas designed to manipulate, mislead, or exploit?

On a broader level, the emergence of human-sounding AI invites a rethinking of agency and responsibility. If a machine can persuade, sympathise, or influence like a person — who is accountable when things go wrong?

As AI becomes more integrated into the human experience, society must evolve its frameworks not only for regulation and ethics but also for cultural adaptation. GPT-4.5 may have passed the Turing Test, but the test for us, as a society, is just beginning.

What comes next: Human-machine dialogue in the post-Turing era

With GPT-4.5 crossing the Turing threshold, we are no longer asking whether machines can talk like us — we are now asking what that means for how we speak, think, and relate to machines. That moment represents a paradigm shift: from testing the machine’s ability to imitate humans to understanding how humans will adapt to coexist with machines that no longer feel entirely artificial.

Future AI models will likely push this boundary even further — engaging in conversations that are not only coherent but also deeply contextual, emotionally attuned, and morally responsive. The bar for what feels ‘human’ in digital interaction is rising rapidly, and with it comes the need for new social norms, protocols, and perhaps even new literacies.

We will need to learn not only how to talk to machines but how to live with them — as collaborators, counterparts, and, in some cases, as reflections of ourselves. In the post-Turing era, the test is no longer whether machines can fool us — it is whether we can maintain clarity, responsibility, and humanity in a world where the artificial feels increasingly real.

GPT-4.5 may have passed a historic milestone, but the real story is just beginning — not one of machines becoming human, but of humans redefining what it means to be ourselves in dialogue with them.

Microsoft at 50 – A journey through code, cloud, and AI

The start of a software empire

Microsoft, the American tech giant, was founded 50 years ago, on 4 April 1975, by Harvard dropout Bill Gates and his childhood friend Paul Allen. Since then, the company has evolved from a small startup into the world’s largest software company.

Its early success can be traced back to a pivotal 1975 deal to supply a BASIC interpreter for the Altair computer, which inspired the pair to launch the business officially.

That same drive for innovation secured Microsoft a breakthrough in 1980, when it partnered with IBM to supply the DOS operating system for IBM PCs, a move that turned Microsoft into a household name.

In 1986, Microsoft went public on the NASDAQ at $21 per share. A year later, Gates joined the billionaire list, at 31 the youngest person to hold that status at the time.

Microsoft expands its empire

Throughout the 1980s and 1990s, Microsoft’s dominance in the software industry grew rapidly, particularly with the introduction of Windows 3.0 in 1990, which sold over 60 million copies and solidified the company’s control over the PC software market.

Over the decades, Microsoft has diversified its portfolio far beyond operating systems. Its Productivity and Business Processes division now includes the ever-popular Office Suite, which caters to both commercial and consumer markets, and the business-focused LinkedIn platform.

Equally significant is Microsoft’s Intelligent Cloud segment, led by its Azure Cloud Services, now the second-largest cloud platform globally, which has transformed the way businesses manage computing infrastructure.

The strategic pivot into cloud computing has been complemented by a range of other products, including SQL Server, Windows Server, and Visual Studio.

The giant under scrutiny

The company’s journey has not been without challenges. Its rapid rise in the 1990s attracted regulatory scrutiny, leading to high-profile antitrust cases and significant fines in both the USA and Europe.

Triggered by concerns over Microsoft’s growing dominance in the personal computer market, US regulators launched a series of investigations into whether the company was actively working to stifle competition.

The initial Federal Trade Commission probe was soon picked up by the Department of Justice, which filed formal charges in 1998. At the heart of the case was Microsoft’s practice of bundling its software, most notably Internet Explorer, with the Windows operating system.

Critics argued that this not only marginalised competitors like Netscape, but also made it difficult for users to install or even access alternative programs.

From Bill Gates to Satya Nadella

Despite these setbacks, Microsoft has continually adapted to the evolving technological landscape. When Steve Ballmer became CEO in 2000, some doubted his leadership, yet Microsoft maintained its stronghold in both business and personal computing.

In the early 2000s, the company overhauled its operating systems under the codename Project Longhorn.

The initiative led to the release of Windows Vista in 2007, which received mixed reactions. However, Windows 7 in 2009 helped Microsoft regain favour, while subsequent updates like Windows 8 and 8.1 aimed to modernise the user experience, especially on tablets.

The transition from Bill Gates to Steve Ballmer, and later to Satya Nadella in 2014, marked a new era of leadership that saw the company’s market capitalisation soar and its focus shift to cloud computing and AI.

Under Nadella’s stewardship, Microsoft has invested heavily in AI, including a notable $1 billion investment in OpenAI in 2019.

The strategic move, alongside the integration of AI features across its software ecosystem, from Microsoft 365 to Bing and Windows, signals the company’s determination to remain at the forefront of technological innovation.

Microsoft’s push for innovation through major acquisitions and investments

Microsoft has consistently demonstrated its commitment to expanding its technological capabilities and market reach through strategic acquisitions.

In 2011, Microsoft made headlines with its $8.5 billion acquisition of Skype, a move intended to rival Apple’s FaceTime and Google Voice by integrating Skype across Microsoft platforms like Outlook and Xbox.

Other strategic acquisitions played a significant role in Microsoft’s evolution, including LinkedIn, GitHub and Mojang, the studio behind Minecraft. In recent years, the company has made notable investments in key sectors, including cloud infrastructure, cybersecurity, AI, and gaming.

One of the most significant moves came in 2024 with Inflection AI, a licensing-and-hiring arrangement widely described as a de facto acquisition. The deal bolstered Microsoft’s efforts to integrate AI into everyday applications, where personal AI tools, essential for both consumers and businesses, enhance productivity and personalisation.

The acquisition strengthens Microsoft’s position in conversational AI, benefiting platforms such as Microsoft 365, Azure AI, and OpenAI’s ChatGPT, which Microsoft heavily supports.

By enhancing its capabilities in natural language processing and user interaction, this acquisition allows Microsoft to offer more intuitive and personalised AI solutions, helping it compete with companies like Google and Meta.

Microsoft acquires Fungible and Lumenisity for cloud innovation

In a strategic push to enhance its cloud infrastructure, Microsoft has made notable acquisitions in recent years, including Fungible and Lumenisity.

In January 2023, Microsoft acquired Fungible for $190 million. Fungible specialises in data processing units (DPUs), which are crucial for optimising tasks like network routing, security, and workload management.

By integrating Fungible’s technology, Microsoft enhances the operational efficiency of its Azure data centres, cutting costs and energy consumption while offering more cost-effective solutions to enterprise customers. This move positions Microsoft to capitalise on the growing demand for robust cloud services.

Similarly, in December 2022, Microsoft acquired Lumenisity, a company known for its advanced fibre optic technology. Lumenisity’s innovations boost network speed and efficiency, making it ideal for handling high volumes of data traffic.

The move has strengthened Azure’s network infrastructure, improving data transfer speeds and reducing latency, particularly important for areas like the Internet of Things (IoT) and AI-driven workloads that require reliable, high-performance connectivity.

Together, these acquisitions reflect Microsoft’s ongoing commitment to innovation in cloud services and technology infrastructure.

Microsoft expands cybersecurity capabilities with Miburo acquisition

Microsoft has also announced its agreement to acquire Miburo, a leading expert in cyber intelligence and foreign threat analysis. This acquisition further strengthens Microsoft’s commitment to enhancing its cybersecurity solutions and threat detection capabilities.

Miburo, known for its expertise in identifying state-sponsored cyber threats and disinformation campaigns, will be integrated into Microsoft’s Customer Security and Trust organisation.

The acquisition will bolster Microsoft’s existing threat detection platforms, enabling the company to better address emerging cyber threats and state-sanctioned information operations.

Miburo’s analysts will work closely with Microsoft’s Threat Intelligence Center, data scientists, and other security teams to expand the company’s ability to counter complex cyber-attacks and the use of information operations by foreign actors.

Miburo’s mission to protect democracies and ensure the integrity of information environments aligns closely with Microsoft’s goals of safeguarding its customers against malign influences and extremism.

The move further solidifies Microsoft’s position as a leader in cybersecurity and reinforces its ongoing investment in addressing evolving global security challenges.

Microsoft’s $68.7 billion Activision Blizzard acquisition boosts gaming and the metaverse

Perhaps the most ambitious acquisition in recent years was Activision Blizzard, which Microsoft acquired for $68.7 billion in 2022.

With this purchase, Microsoft significantly expanded its presence in the gaming industry, integrating popular franchises like Call of Duty, World of Warcraft, and Candy Crush into its Xbox ecosystem.

The acquisition not only enhances Xbox’s competitiveness against Sony’s PlayStation but also positions Microsoft as a leader in the metaverse, using gaming as a gateway to immersive digital experiences.

This deal reflects the broader transformation in the gaming industry driven by cloud gaming, virtual reality, and blockchain technology.

A greener future: Microsoft’s sustainability goals

Another crucial element of the company’s business strategy is its dedication to sustainability, which will serve as the foundation of its operations and future objectives.

Microsoft has set ambitious targets to become carbon negative and water positive and achieve zero waste by 2030 while protecting ecosystems.

With a vast global presence spanning over 60 data centre regions, Microsoft leverages its cloud computing infrastructure to optimise both performance and sustainability.

The company’s approach focuses on integrating efficiency into every aspect of its infrastructure, from data centres to custom-built servers and silicon.

A key strategy in Microsoft’s sustainability efforts is its Power Purchase Agreements (PPAs), which aim to bring more carbon-free electricity to the grids where the company operates.

By securing over 34 gigawatts of renewable energy across 24 countries, Microsoft is not only advancing its own sustainability goals but also supporting the global transition to clean energy.

Microsoft plans major investment in AI infrastructure

Microsoft has also announced plans to invest $80 billion in building data centres designed to support AI workloads by the end of 2025. More than half of this investment will be directed towards the USA.

As AI technology continues to grow, Microsoft’s spending includes billions on Nvidia graphics processing units (GPUs) to train AI models.

The rapid rise of OpenAI’s ChatGPT, launched in late 2022, has sparked a race among tech companies to develop their own generative AI models.

Having invested more than $13 billion in OpenAI, Microsoft has integrated its AI models into popular products such as Windows and Teams, while also expanding its cloud services through Azure.

Microsoft’s growth strategy shapes the future of tech innovation

All these acquisitions and investments reflect a cohesive strategy aimed at enhancing Microsoft’s leadership in key technology areas.

From AI and gaming to cybersecurity and cloud infrastructure, the company is positioning itself at the forefront of digital transformation. However, while these deals present significant growth opportunities, they also pose challenges.

Ensuring successful integration, managing regulatory scrutiny, and creating synergies between acquired entities will be key to Microsoft’s long-term success. In conclusion, Microsoft’s strategy highlights its dedication to innovation and technology leadership.

From its humble beginnings converting BASIC for Altair to its current status as a leader in cloud and AI, Microsoft’s story is one of constant reinvention and enduring influence in the digital age.

By diversifying across multiple sectors, including gaming, cloud computing, AI, and cybersecurity, the company is building a robust foundation for future growth.

It is a digital business model that not only reinforces Microsoft’s market position but also plays a vital role in shaping the future of technology.

For more information on these topics, visit diplomacy.edu.

Ghibli trend as proof of global dependence on AI: A phenomenon that overloaded social networks and systems

It is rare to find a person in this world (with internet access) who has not, at least once, consulted AI about some dilemma, idea, or a simple question.

The wide range of information and rapid response delivery has led humanity to embrace a ‘comfort zone’, allowing machines to reason for us and, more recently, even to turn photographs into animated artwork.

This brings us to a trend that, within just a few days, spread across the globe: the Ghibli style emerged spontaneously on social networks. When people realised they could obtain animated versions of their favourite photos within seconds, the entire network became overloaded.

With no brake mechanism in place, reactions from leading figures were inevitable, with Sam Altman, CEO of OpenAI, speaking out.

He stated that the trend had surpassed all expectations and that servers were ‘strained’, so the Ghibli style was made available only to ChatGPT users on the Plus, Pro, and Team subscription tiers.

Besides admiring AI’s incredible ability to create iconic moments within seconds, this phenomenon also raises the issue of global dependence on artificial intelligence.

Why are we all so in love with AI?

The answer to this question is rather simple, and here’s why. Imagine being able to finally transform your imagination into something visible and share all your creations with the world. It doesn’t sound bad, does it?

This is precisely where AI has made its breakthrough and changed the world forever. Just as Ghibli films have, for decades, inspired fans with their warmth and nostalgia, AI technology has created something akin to the digital equivalent of those emotions.

People are now creating and experiencing worlds that previously existed only in their minds. However, no matter how comforting it sounds, warnings are often raised about maintaining a sense of reality to avoid ‘falling into the clutches’ of a beautiful virtual world.

Balancing innovation and simplicity

Altman warned about the excessive use of AI tools, stating that even his employees are sometimes overwhelmed by the progress of artificial intelligence and the innovations it releases daily.

As a result, people are unable to adapt as quickly as AI evolves, with information spreading faster than ever before.

However, there are also frequent cases of misuse, raising the question – where is the balance?

The culture of continuous production has led to saturation but also a lack of reflection. Perhaps this very situation will bring about the much-needed pause and encourage people to take a step back and ‘think more with their own heads’.

Ghibli is just one of many: How AI trends became mainstream

AI has been with us for a long time, but it was not as popular until major players such as OpenAI, Google (with Gemini), and Microsoft (with Azure) appeared. The Ghibli trend is just one of many that have become part of pop culture in recent years.

Since 2018, we have witnessed deepfake technologies, where various video clips, due to their ability to accurately recreate faces in entirely different contexts, flood social networks almost daily.

AI-generated music and audio recordings have also been among the most popular trends promoted over the past four years because they are ‘easy to use’ and offer users the feeling of creating quality content with just a few clicks.

There are many other trends that have captured the attention of the global public, such as the Avatar trend (Lensa AI), generated comics and stories (StoryAI and ComicGAN), while anime-style generators have actually existed since 2022 (Waifu Labs).

Are we really that lazy or just better organised?

The availability of AI tools at every step has greatly simplified everyday life, from applications that assist in content creation, whether written or in any other format.

For this reason, the question arises – are we lazy, or have we simply decided to better organise our free time?

This is a matter for each individual, and the easiest way to find out is to ask yourself whether you have ever consulted AI about choosing a film, music, or some activity that previously did not take much energy.

AI offers quick and easy solutions, which is certainly an advantage. However, on the other hand, excessive use of technology can lead to a loss of critical thinking and creativity.

Where is the line between efficiency and dependence if we rely on algorithms for everything? That is an answer each of us will have to find at some point.

A view on AI overload: How can we ‘break free from dependence’?

The constant reliance on AI and the comfort it provides after every prompt is appealing, but abusing it leads to a completely different extreme.

The first step towards ‘liberation’ is to admit that there is a certain level of over-reliance, which does not mean abandoning AI altogether.

Understanding the limitations of technology can definitely be the key to returning to essential human values. Digital ‘detox’ implies creative expression without technology.

Can we use technology without it becoming the sole filter through which we see the world? After all, technology is a tool, not a dominant factor in decision-making in our lives.

Ghibli trend enthusiasts – the legendary Hayao Miyazaki does not like AI

The founder of Studio Ghibli, Hayao Miyazaki, recently reacted to the trend that has overwhelmed the world. The creator of famous works such as Princess Mononoke, Howl’s Moving Castle, Spirited Away, My Neighbour Totoro, and many others is vehemently opposed to the use of AI.

Known for his hand-drawn approach and whimsical storytelling, Miyazaki has raised ethical concerns, noting that the AI tools behind such trends are trained on vast amounts of data, including copyrighted works.

Besides criticising the use of AI in animation, he believes that such tools cannot replace the human touch, authenticity, and emotions conveyed through the traditional creation process.

For Miyazaki, art is not just a product but a reflection of the artist’s soul – something machines, no matter how advanced, cannot truly replicate.

X’s Türkiye tangle, between freedom of speech, control, and digital defiance

In the streets of Istanbul and beyond, a storm of unrest swept Türkiye in the past week, sparked by the arrest of Istanbul Mayor Ekrem İmamoğlu, a political figure whose detention has provoked nationwide protests. Amid these events, a digital battlefield has emerged, with X, the social media platform helmed by Elon Musk, thrust into the spotlight. 

Global news reveals that X has suspended many accounts linked to activists and opposition voices sharing protest details. Yet, a twist: X has also publicly rebuffed a Turkish government demand to suspend ‘over 700 accounts,’ vowing to defend free speech. 

This clash between compliance and defiance offers a vivid example of the controversy around freedom of speech and content policy in the digital age, where global platforms, national power, and individual voices collide like tectonic plates on a restless earth.

The spark: protests and a digital crackdown

The unrest began with İmamoğlu’s arrest, a move many saw as a political jab by President Recep Tayyip Erdoğan’s government against a prominent rival. As tear gas clouded the air and chants echoed through Turkish cities, protesters turned to X to organise, share live updates, and amplify their dissent. University students, opposition supporters, and grassroots activists flooded the platform with hashtags and footage: raw, unfiltered glimpses of a nation at odds with itself. But this digital megaphone didn’t go unnoticed. Turkish authorities pinpointed 326 accounts for takedown, accusing them of ‘inciting hatred’ and destabilising order. X’s response? It partially fulfilled the authorities’ alleged requests, reportedly suspending many of the accounts.

The case isn’t the first in which Turkish authorities have required platforms to take action. For instance, during the 2013 Gezi Park protests, Twitter (X’s predecessor) faced similar requests. Erdoğan’s administration has long wielded legal provisions like Article 299 of the Penal Code (insulting the president) as a means of fining platforms that do not align with government content policy. Freedom House’s 2024 report labels the country’s internet freedom as ‘not free,’ citing a history of throttling dissent online. Yet, X’s partial obedience here (selectively suspending accounts) hints at a tightrope walk: bowing just enough to keep operating in Türkiye while dodging a complete shutdown that could alienate its user base. For Turks, it’s a bitter pill: a platform they’ve leaned on as a lifeline for free expression now feels like an unreliable ally.

X’s defiant stand: a free speech facade?

Then came the curveball. Posts on X from users like @botella_roberto lit up feeds with news that X had rejected a broader Turkish demand to suspend ‘over 700 accounts,’ calling it ‘illegal’ and doubling down with a statement: ‘X will always defend freedom of speech.’ Such a stance paints X as a guardian of expression, a digital David slinging stones at an authoritarian Goliath.

Either way, one theory, whispered across X posts, is that X faced an ultimatum: suspend the critical accounts or risk a nationwide ban, a fate Twitter suffered in 2014.

By complying with a partial measure, X might be playing a calculated game: preserving its Turkish foothold while burnishing its free-speech credibility globally. Musk, after all, has built X’s brand on unfiltered discourse, a stark pivot from Twitter’s pre-2022 moderation-heavy days. Yet, this defiance rings hollow to some. Amnesty International’s Türkiye researcher noted that the suspended accounts (often young activists) were the very voices X claims to champion.

Freedom of speech: a cultural tug-of-war

This saga isn’t just about X or Türkiye; it is an example reflecting the global tussle over what ‘freedom of speech’ means in 2025. In some countries, it is enshrined in laws and fiercely debated on platforms like X, where Musk’s ‘maximally helpful’ ethos thrives. In others, it’s a fragile thread woven into cultural fabrics that prizes collective stability over individual outcry. In Türkiye, the government frames dissent as a threat to national unity, a stance rooted in decades of political upheaval—think coups in 1960 and 1980. Consequently, protesters saw X as a megaphone to challenge that narrative, but when the platform suspended some of their accounts, it was as if the rug had been yanked out from under their feet, reinforcing an infamous sociocultural norm: speak too loud and you’ll be hushed.

Posts on X echo a split sentiment: some laud X for resisting some of the government’s requests, while others decry its compliance as a betrayal. This duality brings us to the conclusion that digital platforms aren’t neutral arbiters in free cyberspace but chameleons, adapting to local laws while trying to project a universal image.

Content policy: the invisible hand

X’s content policy, or lack thereof, adds another layer to this sociocultural dispute. Unlike Meta or YouTube, which lean on thick rulebooks, X under Musk has slashed moderation, betting on user-driven truth over top-down control. Its 2024 transparency report, cited in X posts, shows a global takedown compliance rate of 80%, but Türkiye’s 86% suggests greater deference to Ankara’s demands. Why? Reuters points to Türkiye’s 2020 social media law, which mandates that platforms appoint local representatives to comply with takedowns or face bandwidth cuts and fines. X’s Istanbul office, opened in 2023, signals its intent to play on Turkish ground, but the alleged refusal of government requests shows a line in the sand: comply, but not blindly.

This policy controversy isn’t unique to Türkiye. In Brazil, X faced a 2024 ban over misinformation, only to backtrack after appointing a local representative. In India, X sues Modi’s government over content removal in the new India censorship fight. In the US, X fights court battles to protect user speech. In Türkiye, it bows (partly) to avoid exile. Each case underscores a sociocultural truth: content policy isn’t unchangeable; it’s a continuous legal dispute between big tech, national power and the voice of the people.

Conclusions

As the protests simmer and X navigates Türkiye’s demands, the world watches a sociocultural experiment unfold. Will X double down on defiance, risking a ban that could cost 20 million Turkish users (per 2024 Statista data)? Or will it bend further, cementing its role as a compliant guest in Ankara’s house? The answer could shape future digital dissents and the global blueprint for free speech online. For now, it is a standoff: X holds a megaphone in one hand, a gag in the other, while protesters shout into the fray.

America’s Bitcoin gamble: A power play for financial dominance 

For years, the US government has maintained a cautious stance on cryptocurrency, often treating it as a regulatory challenge rather than an economic opportunity. Recent policy moves under President Donald Trump suggest that a dramatic shift is underway—one that could redefine the nation’s role in the digital asset space. During his pre-election campaign, Trump promised to create a Strategic Bitcoin Reserve, a move that generated significant excitement among crypto advocates. In the post-election period, a series of measures have been introduced, reflecting a deeper recognition of cryptocurrency’s growing influence. But are these actions bold steps towards financial innovation, or simply political manoeuvres designed to capture a rising economic trend? The answer may lie in how these policies unfold and whether they translate into real, lasting change for Bitcoin and the broader crypto ecosystem.

Digital Asset Stockpile: Has the promise of Bitcoin as a reserve been betrayed?

The first major step in this shift came on 23 January, when Trump signed an executive order promoting cryptocurrency and paving the way for the establishment of the US Digital Asset Stockpile. At first glance, this move appeared to be a groundbreaking acknowledgement of cryptocurrencies as valuable national assets. However, a closer look revealed that the stockpile was not focused on Bitcoin alone but included a mix of digital assets, all sourced from government seizures in criminal and civil procedures. This raised immediate concerns among Bitcoin advocates, who had expected a more direct commitment to Bitcoin as a reserve asset, as promised. Instead of actively purchasing Bitcoin to build a strategic reserve, the US government chose to rely solely on confiscated funds, raising questions about the long-term sustainability and intent behind the initiative. Was this a step towards financial innovation, or simply a way to repurpose seized assets without committing to a larger crypto strategy?

The ambiguity surrounding the Digital Asset Stockpile led many to doubt whether the US government was serious about adopting Bitcoin as a key financial instrument. If the goal was to establish a meaningful reserve, why not allocate funds to acquire Bitcoin on the open market? By avoiding direct investment, the administration sent mixed signals—recognising digital assets’ importance while hesitating to commit real capital. This move, while significant, seemed to fall short of the expectations set by previous pro-crypto rhetoric. 

America’s bold Bitcoin strategy could set off a global wave, reshaping the future of digital finance and economic power.

Strategic Bitcoin Reserve: A step towards recognising Bitcoin’s unique role

Just when it seemed like the US was betraying its promises to the crypto community, a new executive order emerged, offering a glimmer of hope. Many were initially disillusioned by the creation of the Strategic Bitcoin Reserve, which was to be built from confiscated assets instead of fresh, direct investments in Bitcoin. This approach raised doubts about the administration’s true intentions, as it seemed more focused on repurposing seized funds than on committing to Bitcoin’s long-term role in the financial system. However, the following executive order signalled a shift in US policy, opening the door to broader recognition of Bitcoin’s potential. While it might not have met the bold expectations set by early promises, it was still a significant step towards integrating cryptocurrency into national and global financial strategies. More importantly, it signalled a move beyond viewing all cryptocurrencies as the same, recognising Bitcoin’s unique position as a digital asset with transformative potential. This was a step further in acknowledging Bitcoin’s importance, distinct from other cryptos, and marking a pivotal moment in the evolution of digital finance.

White House Crypto Summit: Bringing legitimacy to the table

As these initiatives unfolded, the White House Crypto Summit added another layer to the evolving policy context. As the first event of its kind, it brought together industry leaders and policymakers in an unprecedented dialogue between government officials and crypto giants. This move was not just about discussing regulations—it was a strategic effort to strengthen the foundation for future pro-crypto actions. Consulting industry insiders provided a crucial opportunity to grasp the true nature of cryptocurrency before finalising legislative measures, ensuring that policies would be informed rather than reactive. By involving key industry players, the administration ensured that upcoming measures would be shaped by those who understand the technology and its potential. It was a calculated step towards framing future policies as collaborative rather than unilateral, fostering a more balanced approach to crypto regulation.

A new memecoin, Everything is Computer (EIC), has emerged following Trump’s viral comment, recording over $15 million in trading volume in a single day.

Bitcoin Act Unveiled: America is ready to HODL

And then, the moment the crypto community had been anticipating finally arrived—a decisive move that could reshape global crypto adoption. Senator Cynthia Lummis reintroduced the Bitcoin Act, a proposal to solidify Bitcoin’s place within the US financial system. Unlike executive orders that can be overturned by future administrations, this bill aimed to establish a permanent legal framework for Bitcoin’s adoption.

What made this proposal even more historic was its bold mandate: the US government would be required to purchase one million BTC over the next five years, a colossal investment worth around $80 billion at the time. To finance this, a portion of the Federal Reserve’s net earnings would be allocated, minimising the burden on taxpayers. Additionally, all Bitcoin acquired through the programme would be locked away for at least 20 years before any portion could be sold, ensuring a long-term commitment rather than short-term speculation. It seems like America is ready to HODL!
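As a quick sanity check, the figures reported above (one million BTC for roughly $80 billion over five years) imply the averages worked out below. This is a simple illustration using only the quantities quoted in the article, not official programme parameters:

```python
# Figures as reported for the Bitcoin Act proposal (illustrative only):
# 1,000,000 BTC to be purchased over 5 years, worth around $80 billion.
total_btc = 1_000_000
total_cost_usd = 80_000_000_000
years = 5

implied_price = total_cost_usd / total_btc  # implied USD price per BTC at the time
annual_purchase = total_btc / years         # average BTC acquired per year

print(f"Implied BTC price: ${implied_price:,.0f}")
print(f"Annual purchase rate: {annual_purchase:,.0f} BTC per year")
```

At the quoted totals, the mandate averages 200,000 BTC per year at an implied price of about $80,000 per coin; the actual outlay would, of course, track market prices at the time of each purchase.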

Trump’s crypto plan: Bringing businesses back to the US

Not just that—President Trump revealed plans to sign an executive order reversing Biden-era crypto debanking policies, a move that could significantly reshape the regulatory landscape if enacted. These policies have made it increasingly difficult for crypto businesses to access banking services, effectively cutting them off from the traditional financial system and driving many firms to relocate offshore.

If implemented, the reversal could have profound repercussions. By removing banking restrictions, the USA could become a more attractive destination for blockchain companies, potentially bringing back businesses that left due to regulatory uncertainty. Easier access to banking would give crypto businesses the stability they need, cutting out the risky loopholes they have had to rely on and making the industry more transparent.

For now, this remains a plan, but its announcement alone has already garnered strong support from the crypto community, which sees it as a critical step towards re-establishing the USA as a leader in digital asset innovation. Senator Cynthia Lummis stated, ‘By transforming the president’s visionary executive action into enduring law, we can ensure that our nation will harness the full potential of digital innovation to address our national debt while maintaining our competitive edge in the global economy.’

Global impact: How US measures could accelerate worldwide crypto adoption

This is not just a story about the USA; it has global implications. The effect of these measures goes beyond American borders. By officially recognising Bitcoin as a strategic asset and rolling back restrictive banking policies, the USA is setting an example that other nations may follow. If the world’s largest economy begins accumulating Bitcoin and incorporating it into its financial framework, it will solidify Bitcoin’s standing as a global reserve asset. This could prompt other countries to rethink their positions, fostering broader institutional adoption and possibly triggering a wave of regulatory clarity worldwide. Moreover, the return of crypto businesses to the USA might spark competition among nations to establish more attractive regulatory environments, speeding up innovation and mainstream adoption.

Simultaneously, these moves send a strong signal to global markets: the uncertainty surrounding the role of Bitcoin in the financial system is decreasing. With the USA taking the lead, institutional investors who were once cautious may gain more confidence to allocate substantial funds to Bitcoin and other digital assets. This could drive broader financial integration, positioning Bitcoin not just as a hedge against inflation or a speculative investment, but also as a central element of future financial systems.

As nations compete to define the future of money, the true test will be whether the world can embrace a decentralised financial system or whether it will ultimately remain tethered to the traditional power structures. One thing is certain: it all comes down to who holds the power in the rise of cryptocurrency, as it will shape the economic relations of the future. 

The future of digital regulation between the EU and the US

Understanding the DMA and DSA regulations

The Digital Markets Act (DMA) and the Digital Services Act (DSA) are two major regulatory frameworks introduced by the EU to create a fairer and safer digital environment. While both fall under the broader Digital Services Act package, they serve distinct purposes.

The DMA focuses on ensuring fair competition by regulating large online platforms, known as gatekeepers, which have a dominant influence on digital markets. It prevents these companies from engaging in monopolistic practices, such as self-preferencing their own services, restricting interoperability, or using business data unfairly. The goal is to create a more competitive landscape where smaller businesses and consumers have more choices.

On the other hand, the DSA is designed to make online spaces safer by holding platforms accountable for illegal content, misinformation, and harmful activities. It imposes stricter content moderation rules, enhances transparency in digital advertising, and ensures better user rights protection. Larger platforms with significant user bases face even greater responsibilities under this act.

The key difference in regulation is that the DMA follows an ex-ante approach, meaning it imposes strict rules on gatekeepers before unfair practices occur. The DSA takes an ex-post approach, requiring platforms to monitor risks and take corrective action after problems arise. This means the DMA enforces competition while the DSA ensures online safety and accountability.

A key component of the DSA package is its emphasis on transparency and user rights. Platforms must explain how their algorithms curate content, prevent the use of sensitive data for targeted advertising, and prohibit manipulative design practices such as misleading cookie banners. The most powerful platforms, classified as Very Large Online Platforms (VLOPs) or Very Large Online Search Engines (VLOSEs), are also required to assess and report on ‘systemic risks’ linked to their services, including threats to public safety, democratic discourse, and mental well-being. However, these reports often lack meaningful detail, as illustrated by TikTok’s inadequate assessment of its role in election-related misinformation.

Enforcement is critical to the success of the DSA. While the European Commission directly oversees the largest platforms, national regulators, known as Digital Services Coordinators (DSCs), play a key role in monitoring compliance. However, enforcement challenges remain, particularly in countries like Germany, where understaffing raises concerns about effective regulation. Across the EU, over 60 enforcement actions have already been launched against major tech firms, yet Silicon Valley’s biggest players are actively working to undermine European rules.

Together, the DMA and the DSA reshape how Big Tech companies operate in the EU, fostering competition and ensuring a safer and more transparent digital ecosystem for users.

Trump and Silicon Valley’s fight against EU regulations

The close relationship between Donald Trump and the Silicon Valley tech elite has significantly influenced US policy towards European digital regulations. Since Trump’s return to office, Big Tech executives have actively lobbied against these regulations and have urged the new administration to defend tech firms from what they call EU ‘censorship.’

Joel Kaplan, Meta’s chief lobbyist, has gone as far as to equate EU regulations with tariffs, a stance that aligns with the Trump administration’s broader trade war strategy. The administration sees these regulations as barriers to US technological dominance, arguing that the EU is trying to tax and control American innovation rather than foster its own competitive tech sector.

Figures like Elon Musk and Mark Zuckerberg have aligned themselves with Trump, leveraging their influence to oppose EU legislation such as the DSA. Meta’s controversial policy changes and Musk’s X platform’s lax approach to content moderation illustrate how major tech firms are resisting regulatory oversight while benefiting from Trump’s protectionist stance.

The White House and the House Judiciary Committee have raised concerns that these laws unfairly target American technology companies, restricting their ability to operate in the European market.

Brendan Carr, chairman of the FCC, has recently voiced strong concerns regarding the DSA, which he argues could clash with America’s free speech values. Speaking at the Mobile World Congress in Barcelona, Carr warned that its approach to content moderation might excessively limit freedom of expression. His remarks reflect a broader criticism from US officials, as Vice President JD Vance had also denounced European content moderation at a recent AI summit in Paris, labelling it as ‘authoritarian censorship.’

These officials argue that the DMA and the DSA create barriers that limit American companies’ innovations and undermine free trade. In response, the House Judiciary Committee has formally challenged the European Commission, stating that certain US products and services may no longer be available in Europe due to these regulations. Notably, the Biden administration also directed its trade and commerce departments to investigate whether these EU laws restrict free speech and to recommend countermeasures.

Recently, US President Donald Trump escalated tensions with the EU by threatening tariffs in retaliation for what he calls ‘overseas extortion.’ The memorandum, signed by Trump on 21 February 2025, directs the administration to review EU and UK policies that might force US tech companies to develop or use products that ‘undermine free speech or foster censorship.’ The memo also targets Digital Services Taxes (DSTs), claiming that foreign governments unfairly tax US firms ‘simply because they operate in foreign markets.’


EU’s response: Digital sovereignty at stake

However, the European Commission insists that these taxes are applied equally to all large digital companies, regardless of their country of origin, ensuring fair contributions from businesses profiting within the EU. It has also defended its regulations, arguing that they promote fair competition and protect consumer rights.

EU officials see these policies as fundamental to Europe’s digital sovereignty, ensuring that powerful tech firms operate transparently and fairly in the region. As they push back against what they see as US interference and tensions rise, the dispute over how to regulate Big Tech could shape the future of digital markets and transatlantic trade relations.

Eventually, this clash could lead to a new wave of trade conflicts between the USA and the EU, with potential economic and geopolitical consequences for the global tech industry. With figures like JD Vance and Jim Jordan also attacking the DSA and the DMA, and Trump himself framing EU regulations as economic warfare, Europe faces mounting pressure to weaken its tech laws. Additionally, the withdrawal of the EU Artificial Intelligence Liability Directive (AILD) following the Paris AI Summit and JD Vance’s refusal to sign a joint AI statement raised more concerns about Europe’s ability to resist external pushback. The risk that Trump will use economic and security threats, including NATO involvement, as leverage against EU enforcement underscores the urgency of a strong European response.

Another major battleground is AI regulation. The EU’s AI Act is one of the world’s first comprehensive AI laws, setting strict guidelines for AI transparency, risk assessment, and data usage. Meanwhile, the USA has taken a more industry-led approach, with minimal government intervention.


This regulatory gap could create further tensions as European lawmakers demand compliance from American AI firms. The recent withdrawal of the AILD under US pressure highlights how external lobbying can influence European policymaking.

However, if the EU successfully enforces its AI rules, it could set a global precedent, forcing US firms to comply with European standards if they want to operate in the region. This scenario mirrors what happened with the GDPR (General Data Protection Regulation), which led to global changes in privacy policies.

To counter the growing pressure, the EU remains steadfast in enforcing the DSA, the DMA, and the AI Act, ensuring that regulatory frameworks are not compromised under US influence. Beyond regulation, Europe must also bolster its digital industrial capabilities to keep pace. The EUR 200 billion AI investment is a step in the right direction, but Europe requires more resilient digital infrastructures, stronger back-end technologies, and better support for its tech companies.

Currently, the EU is doubling down on its push for digital sovereignty by investing in:

  • Cloud computing infrastructure to reduce reliance on US providers (e.g., AWS, Microsoft Azure)
  • AI development and semiconductor manufacturing (through the European Chips Act)
  • Alternative social media platforms and search engines to challenge US dominance

These efforts aim to lessen European dependence on US Big Tech and create a more self-sufficient digital ecosystem.

The future of digital regulations

Despite the escalating tensions, both the EU and the USA recognise the importance of transatlantic tech cooperation. While their regulatory approaches differ significantly, there are areas where collaboration could still prevail. Cybersecurity remains a crucial issue, as both sides face growing threats from several countries. Strengthening cybersecurity partnerships could provide a shared framework for protecting critical infrastructure and digital ecosystems. Another potential area for collaboration is the development of joint AI safety standards, ensuring that emerging technologies are regulated responsibly without stifling innovation. Additionally, data-sharing agreements remain essential to maintaining smooth digital trade and cross-border business operations.

Past agreements, such as the EU-US Data Privacy Framework, have demonstrated that cooperation is possible. However, whether similar compromises can be reached regarding the DMA, the DSA, and the AI Act remains uncertain. Fundamental differences in regulatory philosophy continue to create obstacles, with the EU prioritising consumer protection and market fairness while the USA maintains a more business-friendly, innovation-driven stance.

Looking ahead, the future of digital regulations between the EU and the USA is likely to remain contentious. The European Union appears determined to enforce stricter rules on Big Tech, while the United States—particularly under the Trump administration—is expected to push back against what it perceives as excessive European regulatory influence. Unless meaningful compromises are reached, the global internet may further fragment into distinct regulatory zones. The European model would emphasise strict digital oversight, strong privacy protections, and policies designed to ensure fair competition. The USA, in contrast, would continue to prioritise a more business-led approach, favouring self-regulation and innovation-driven policies.


As the digital landscape evolves, the coming months and years will be crucial in determining whether the EU and the USA can find common ground on tech regulation or whether their differences will lead to deeper division. The stakes are high, affecting not only businesses but also consumers, policymakers, and the broader future of the global internet. The path forward remains uncertain, but the decisions made today will shape the structure of the digital world for generations to come.

Ultimately, the outcome of this ongoing transatlantic dispute could have wide-reaching implications, not only for the future of digital regulation but also for global trade relations. While the US government and the Silicon Valley tech elite are likely to continue their pushback, the EU appears steadfast in its determination to ensure that its digital regulations are enforced to maintain a fair and safe digital ecosystem for all users. As this global battle unfolds, the world will be watching as the EU and USA navigate the evolving landscape of digital governance.

OEWG’s tenth substantive session: Entering the eleventh hour

The UN Open-Ended Working Group (OEWG) on the security of and in the use of information and communications technologies in 2021–2025 held its tenth substantive session on 17–21 February 2025.

Some of the main takeaways from this session are:

  • Ransomware, AI and threats to critical infrastructure remain the biggest concerns countries have regarding the threat landscape. Even as countries don’t agree on an exhaustive list of threats or their sources, there is a strong emphasis on collective and cooperative responses, such as capacity building and knowledge sharing, to reduce, mitigate, and manage these threats.
  • The long-standing debate between implementing existing norms and developing new ones continued. However, this session saw ASEAN countries take a more pragmatic approach, emphasising concrete steps toward implementing agreed norms while maintaining openness to discussing new ones in parallel. At the same time, the call from developing countries for greater capacity-building gained momentum, underscoring the challenge of implementing norms without sufficient resources and support.
  • The discussions on international law showed little progress in narrowing the gap between states’ positions — there is still no consensus on the necessity of new legally binding regulations for cyberspace. There is also discord on how to proceed with discussing international law in the future permanent UN mechanism on cybersecurity.
  • Discussions on confidence-building measures (CBMs) were largely subdued: few new CBMs were introduced, and states offered little detail on their experience with the POC Directory. Many states shared their CBM implementation, often linked to regional initiatives and best practices, showing eagerness to operationalise CBMs. It seems that states now anticipate the future permanent mechanism to serve as the forum for detailed CBM discussions.
  • The Voluntary Fund and the Capacity-Building Portal have increasingly been regarded as key deliverables of the OEWG process. However, states remain cautious about the risk of duplicating existing global and regional initiatives, and a clear consensus has yet to emerge regarding the objectives of these deliverables.
  • States are still grappling with the questions of thematic groups and non-state stakeholder engagement in the future permanent mechanism. The Chair’s upcoming reflections and townhalls are likely to get the ball rolling on finding elements for the future permanent mechanism acceptable to all delegations.

As negotiations enter the eleventh hour ahead of the OEWG’s eleventh session, consensus remains elusive. Tensions ran high from the first day, with attributions of cyberattacks, and rights of reply denouncing those attributions, taking centre stage. States held tightly to their positions, largely unchanged since the last session in December 2024. The Chair pointed out that direct dialogue was lacking, with participants instead opting for a virtual town hall approach—circulating their positions and posting them on the portal—and reminded delegates that any decisions would be made by consensus, urging them to demonstrate flexibility.

Threats: Collective action is key

The discussions at this session revealed a range of national perspectives on cybersecurity threats. Malicious use of AI, critical infrastructure attacks and ransomware remained central concerns.

Collective solutions for cyber threats

The consensus remained clear throughout the discussions: cyber threats are a shared challenge requiring collective solutions. 

Nigeria underscored the importance of a comprehensive international framework to harmonise responses to cyber threats. Collaboration between state Computer Emergency Response Teams (CERTs), strategic planning, and continuous monitoring of emerging threats were highlighted as essential components. Albania reinforced the value of cooperative approaches in incident management, warning that cyberattacks could escalate tensions if misattributed. Albania also advocated for robust diplomatic dialogue through strengthened communication channels among CERTs and intelligence-sharing agreements. Uruguay and Argentina underscored the need for knowledge transfer and shared expertise in identifying and responding to cyber threats. Malaysia and South Africa further emphasised that fostering collaboration among technical experts, academia, and government officials would enhance cybersecurity preparedness. Bosnia and Herzegovina emphasised resilience-building through strategic communication and public awareness. 

Capacity building remained a priority for developing nations. Mauritius and Malawi stressed the urgent need for technical assistance, funding, and training to strengthen cybersecurity frameworks in regions facing resource constraints. Indonesia echoed this sentiment, advocating for increased knowledge sharing and technical cooperation to collectively address evolving threats. Nigeria advocated for capacity building in developing nations to reduce technological dependency and improve cybersecurity defences. Ghana called for greater investment in cybersecurity research and innovation to bolster national defences. 

Australia pointed to cyber sanctions as a means to deter malicious actors and impose tangible consequences on cyber criminals. Switzerland, focusing on the increasing threat of ransomware, stressed the need for states to uphold international law, reinforce resilience, and enhance international cooperation.

A particular concern was the spread of misinformation and disinformation, which Nigeria suggested should be countered through the circulation of accurate information without infringing on freedom of expression.

Final Report: How to best reflect discussions on threats

Several delegations emphasised key issues for inclusion in the OEWG final report. The EU, Croatia, New Zealand, and South Korea supported continued references to ransomware. 

China’s concerns for the final report include the risks of politicising cybersecurity and ICT, which threaten global cooperation and digital integrity. It also highlighted rising cyber tensions and conflict, particularly offensive strategies and attacks on critical infrastructure. China stressed the importance of addressing false claims about cyber incidents, which harm trust between nations, and called for secure ICT supply chains and the prevention of backdoors in products. 

China advocated for a comprehensive, evidence-based approach to data security in the AI era, focusing on data localisation and cross-border transfer issues. Malaysia supported China on the importance of addressing data security which should be included in the final report. 

El Salvador urged that the annual reports reflect the importance of safe and transparent data management throughout the whole life cycle with practices that protect privacy, particularly relevant for generative AI models, which Malaysia supported. 

El Salvador also believes it is essential that the report include a reference to the development of cryptographic standards that remain resistant in the quantum era, a call Czechia echoed.

The future permanent mechanism: How to tackle discussions on threats

As discussions moved toward the future of global cybersecurity governance, the EU proposed a dedicated thematic group under the Program of Action (PoA) to systematically assess threats, enhance security, and coordinate capacity-building efforts. The USA and Portugal reinforced the urgency of this initiative, calling for a flexible yet permanent platform to address cyber threats, particularly ransomware.

Several countries stressed the importance of sector-specific security measures. Malaysia highlighted the need to tailor protections for different industries, while Mexico advocated for harmonised cybersecurity standards and multistakeholder cooperation across the digital supply chain. Mauritius and Malawi reaffirmed the importance of upholding international cyber norms, with Malawi emphasising continued dialogue within the UN Open-Ended Working Group (OEWG).

Australia and Canada pushed for linking emerging threats to responsible state behaviour under international law, with Canada calling for thematic groups to enable deeper discussions beyond general plenary meetings. Switzerland and Germany agreed, underscoring the need to first establish a shared understanding of threats before implementing coordinated responses. France called for shifting from merely identifying threats to actively developing solutions, proposing that expert briefings guide working group discussions.

AI security also emerged as a key concern. Malaysia stressed the role of AI developers in cybersecurity, while Argentina highlighted the private sector’s responsibility in addressing AI-related threats. Italy pointed to the recent Joint High-Level Risk Analysis on AI, which provides recommendations for securing AI systems and supply chains.

Norms, rules and principles: A near standstill in discussions

Need for new norms vs. implementation of existing ones

The divide persists between states that prioritise implementing the agreed norms (e.g. Japan, Switzerland, Australia, Canada, South Korea, Kazakhstan) and those advocating for new, legally binding rules (e.g. Russia, Pakistan, Cuba). The former group argued that introducing new norms without fully implementing current ones could dilute efforts, while the latter believed that voluntary norms lack accountability, particularly in crises. Italy specifically called for full implementation of existing cyber norms before introducing new ones. 

Among the new norms proposed, Kazakhstan put forward a norm on ‘zero trust’, emphasising continuous verification and access controls, although it acknowledged the need to prioritise implementing agreed norms. El Salvador repeated its proposal to update norm E regarding privacy and personal data. China highlighted that existing norms do not cover data security, while Vietnam called for new norms to address emerging technologies and the digital divide.

Some states didn’t propose new norms but sought fresh perspectives on existing ones. The UK suggested categorising the 11 norms into three themes: Cooperation (Norms A, D, H), Resilience (Norms G, J), and Stability (Norms B, C, E, F, I, K). France and the UK also reiterated the need for Norm I to address the non-proliferation of malicious tools. Portugal emphasised the importance of a common understanding of due diligence. Italy prioritised supply chain security, advocating for measures like ICT supply chain security assessments, Software Bills of Materials (SBOMs), national security evaluation centres, and cybersecurity certification schemes.

Some countries (e.g. Malaysia and Brazil) proposed a balanced approach, supporting both the implementation and development of norms. The EU and the USA stressed that negotiations on binding agreements could be resource-intensive and counterproductive. Iran counter-argued that a uniform approach to norm implementation is impractical due to each nation’s unique circumstances. Nicaragua and Pakistan contended that non-binding norms fail to address emerging threats effectively, while China pointed out that the 2015 UN GGE report allows for developing additional norms over time.

Capacity-building as a critical component for cyber norms implementation

Many states, particularly Singapore, Indonesia, Pakistan, and Mauritius, emphasised that implementing cyber norms requires bridging the technical gap between developed and developing nations. Iran and Cuba noted that resource constraints hinder developing countries. Kenya and South Africa advocated for integrating long-term capacity-building into the future UN cyber mechanism to improve norm implementation. Kenya highlighted the challenges posed by varying technical expertise among states. For example, Norm C, which requires states not to knowingly allow their territory to be used for internationally wrongful acts, demands specific tools and skills that not all countries possess.

Singapore argued that each norm has policy, operational, technical, legal, and diplomatic aspects, and developing the capacity to implement these norms is essential for identifying gaps and determining the need for new norms. In this context, the ASEAN-Singapore Cybersecurity Centre of Excellence will launch a series of capacity-building workshops called ‘Cyber Norms in Action.’ 

Voluntary checklists: Cyber norms implementation 

The voluntary checklist is broadly supported as a tool for operationalising agreed cyber norms. Countries (e.g. Colombia, Japan, and Malaysia) view it as a ‘living document’ that should evolve alongside the changing landscape of cyber threats. Kazakhstan suggested incorporating best practices for incident response and public-private collaboration.

Despite this support, some countries remain sceptical. Cuba and Iran cautioned against using the checklist as a de facto assessment tool for evaluating states’ cybersecurity performance. China insisted that the checklist remains within the UN information security framework to maintain neutrality. Iran proposed delaying negotiations on the checklist until a broader consensus is reached under a permanent UN cyber mechanism.

An important aspect of the checklist is its potential to promote inclusive cybersecurity governance. The UK, Brazil, and the Netherlands stressed the need to integrate a gender perspective, ensuring that the implementation of cyber norms considers the disproportionate impact on women and vulnerable communities. 

International law: Little progress made

The discussions on international law showed little progress in bringing positions closer together. States made suggestions on how to capture the progress of the OEWG 2021-2025 in its Final Report and shared opinions on the structure and content of discussions on international law within the future permanent mechanism.

The persistent rift: The need for a new legally binding framework

In the substantive positions of the states on international law, the rift remains between the states that do not see a need for a new legally binding framework and those that do.

The majority of states (Sweden, the EU, the Republic of Korea, the UK and others) do not see the need for a new legally binding framework and emphasise the need to discuss the application of existing international law in cyberspace. In rushing to discuss new legally binding obligations, the UK sees the risk of undermining the application of core, foundational rules of international law, including the UN Charter.

Cuba, China, Russia, Pakistan, and the Islamic Republic of Iran reiterated their positions, stating that the new legally binding mechanism is necessary to prevent interstate conflicts in cyberspace and to contribute to strengthening cooperation in this area. China has supported the Russian Draft Convention on International Information Security as a good basis for discussions. At the same time, Pakistan and Iran stated that there are gaps in international law that need to be addressed by binding rules.

Despite the Chair’s December 2024 call for flexibility and the mounting time pressure, the statements on both sides repeated the positions voiced in past substantive sessions. 

These differences directly translate to the language that the states were proposing to be included in the 2021-2025 OEWG Final Report, as well as positions on how to structure the Future Permanent Mechanism. 

Final Report: How to best reflect progress

States discussed proposals on how to best reflect the progress of the 2021-2025 OEWG on international law in its Final Report, as it will serve as a summary of efforts and positions, and as a basis for negotiations within the future permanent mechanism. 

The states predominantly concluded that the OEWG was a successful process and contributed to a greater understanding of international law in cyberspace. Specifically, states (Austria, Sweden, Brazil, Senegal, Canada, Thailand, Czechia, the EU, Vanuatu, Switzerland, Australia, Germany and others) saw progress in the number of national and regional positions on the applicability of international law in cyberspace published in the course of the 2021-2025 OEWG. 

There were also specific wording suggestions for inclusion in the Final Report. The Joint Statement on International Law (Australia, Chile, Colombia, the Dominican Republic, El Salvador, Estonia, Fiji, Kiribati, Moldova, the Netherlands, Papua New Guinea, Thailand, Uruguay and Viet Nam) gained support from Czechia, Canada, Switzerland, the UK, the Republic of Moldova, Ireland, and others. The re-published paper, now with more co-sponsors, offers convergence language for the Final Report that includes peaceful settlement of disputes, respect for international human rights obligations, the principle of state responsibility, and application of international humanitarian law to ICT activities during armed conflicts.

Another wave of proposals was focused on including a clear reference to the applicability of international humanitarian law and the fundamental legal principles of humanity, neutrality, necessity, proportionality, and distinction in the Final Report, supported by Sweden, the USA, the Republic of Korea, Malawi, Senegal, the EU, Tonga on behalf of the Pacific Island Forum, Australia, Germany, Republic of Moldova, Ireland, Ghana, Austria, and others. Just like in the 9th OEWG substantive session in December 2024, the Resolution on protection for the civilian population against the humanitarian consequences of the misuse of digital technologies in armed conflict within the framework of the 34th International Red Cross and Red Crescent Conference resonated with the states. 

Brazil has referred explicitly to the Operative Paragraph 4 of that Resolution (‘states recalled that in situations of armed conflict, international humanitarian law rules and principles serve to protect civilian populations and other protected persons and objects, including against the risks arising from ICT activities’) to be included in the Final Report. Canada, France, Netherlands, Czechia, and others supported this proposal. 

Switzerland, which sees the inclusion of the applicability of international humanitarian law as a priority, has also proposed a specific wording for the Final Report that builds on the 34th ICRC resolution and includes medical and humanitarian facilities.

States also called for stronger wording on the applicability of human rights law (Australia, Albania, Malawi, Mexico, Mozambique, Moldova, North Macedonia, Senegal, Switzerland, Thailand, and Germany) in the Final Report.

Cuba and Iran believe the Final Report should include references on setting up a legally binding instrument, and definitions of terms and technical mechanisms.

The future permanent mechanism: How to tackle international law

States further discussed ways that the discussions on international law would be incorporated and framed within the Future Permanent Mechanism. 

States reflected on Annex C of the Chair’s Discussion Paper on Draft Elements on Stakeholder Modalities and Dedicated Thematic Groups of the Future Permanent Mechanism, which proposed a dedicated thematic group on rules, norms and principles of responsible state behaviour and on international law. Mexico, Colombia, Indonesia, and Algeria endorsed the thematic group dedicated both to norms and international law, as they see these as complementary and contributing to safety and security.

Others, such as Sweden, the EU, Czechia, Brazil, and the USA, did not support the Chair’s proposal to create a single thematic group for norms and international law: given the voluntary nature of norms and the binding nature of international law, combining these discussions risks conflating distinct legal and policy concepts and could hinder progress in both areas. 

Canada proposed integrating international law into each of the first three thematic working groups set out in the Chair’s discussion paper (building resilience, enhancing cooperation in the management of ICT-related incidents, including through CBMs, and preventing conflict and increasing stability in the ICT sphere) to build common understandings on how international law applies to practical policy challenges. Thematic group meetings could include expert briefings on technical and legal topics and scenario-based discussions.

The states have deepened discussions on the Program of Action proposed by France, which seeks to incorporate discussions on international law in a cross-cutting manner in three action-oriented thematic groups: on building resilience, cooperation in the management of ICT-related incidents, and prevention of conflict and increasing stability in cyberspace. This approach was supported by Sweden, Portugal, Czechia, the UK, the EU, Albania, Australia, Germany, Ireland and others. The PoA also foresees the inclusion of non-state experts in cybersecurity, to which the EU and North Macedonia specifically expressed their support. 

In addition to the two proposals above, several states have voiced additional proposals.  

Switzerland generally supported the thematic and cross-cutting working groups as proposed by France but voiced concern that they might not suffice for in-depth discussions on international law. Switzerland considers it better for the discussion on norm implementation to take place in the cross-cutting working groups, while the discussion on the application of international law would benefit from a dedicated forum. 

The USA believes that states are ready to integrate the discussion into practical, thematic working groups oriented toward specific, real-world threats to international peace and stability and focused on practical tools. 

Senegal recalled the equal importance and relevance of the five pillars of the OEWG mandate and would be willing to discuss adding a pillar on the application of international law. 

Iran, China and Russia see as a priority within the future permanent mechanism to initiate a substantive discussion on developing legally binding obligations in the ICT field and have a dedicated thematic group on international law. These states do not support the participation of non-state experts in the discussions. 

Ireland does not consider a thematic group on international law necessary or desirable. Their concern would be that such a group could be stifled by being overly outcome-focused and that it would duplicate efforts and divert resources and attention from more dynamic engagement on legal issues within the other thematic groups. Conversely, Egypt sees the need for a dedicated platform on international law in the future permanent mechanism and is sure that the modalities, mandate, structure, and types of discussions can be agreed on by consensus. Egypt sees the discussion as reshaping the content of international law and underscores the need to have a place within the UN to have a multilateral conversation with the participation of stakeholders.

The role of capacity building in fostering a better understanding of states on how international law applies to cyberspace and contributes to promoting peace, security, and stability in cyberspace was underscored by Tonga on behalf of the Pacific Island Forum, Viet Nam, Kenya, Ghana, Canada, Thailand, UK, France, Colombia, and many others.

CBMs: Looking forward to the permanent mechanism

A more subdued CBMs discussion at this session seems to suggest that states now anticipate the future permanent mechanism to serve as the forum for detailed CBMs discussions. Kazakhstan suggested that addressing subtopics, such as standardised incident response procedures, would be more effective within thematic groups engaged in detailed discussions rather than plenary sessions. Some states voiced support for a cross-cutting approach to discussing CBMs more efficiently in the permanent mechanism; Germany, for instance, proposed addressing CBMs 3, 5 and 6 under the single umbrella of the resilience of critical infrastructure.

While the previous session had already seen a decline in the discussion of additional CBMs, only Iran circulated a working paper proposing a new CBM to ensure unhindered access to a secure ICT market for all, aiming to foster global trust and confidence. No other state engaged with this proposal; Germany merely remarked that it might be more appropriately framed as a norm, given its reference to expectations or obligations.

The deliberations on standardised templates further exemplify the subdued nature of this session’s CBM discussions. South Africa, with Brazil’s support, reiterated its proposal for a template encompassing a brief description of the assistance required, details of the cyber incident, acknowledgement of receipt by the requested state, and indicative response timeframes. Thailand emphasised the necessity for a flexible template, while Korea underscored that it should serve as a communication reference without imposing constraints on interactions. Finally, Kazakhstan reiterated its proposal to have specific templates for different scenarios, such as incident escalation, threat intelligence sharing and cyber capacity-building requests. The Secretariat is anticipated to produce such a standardised template by April 2025. In related matters, Mauritius proposed the development of secure communication platforms for exchanging information on cyber incidents.

This contrasts with the dynamic CBM landscape at the regional level, where numerous states shared their CBM implementations (the United Kingdom, Albania, Korea, Canada, Ethiopia, North Macedonia, Kenya, and the OSCE) often linked to regional initiatives and best practices (Tonga, Bosnia and Herzegovina, Thailand, Ghana, Brazil, Dominican Republic, Philippines). This further illustrates states’ eagerness to advance the operationalisation of CBMs.

The POC: Finally ripe for the picking?

As of the 10th session, 116 states have joined the Points of Contact (POC) Directory—an increase of five since December—registering nearly 300 diplomatic and technical POCs. The Secretariat shared conclusions from the December ping test and provided a detailed overview of the upcoming scenario-based exercise scheduled for March 10–11 and March 17–18, 2025. The Russian Federation actively encouraged remaining member states to participate in the POC Directory, promoting its guidelines on designating UN technical POCs and supporting a UNIDIR seminar aimed at achieving universal participation in the directory.

While most states remained silent regarding the ping test outcomes and their experiences with the POC Directory, three nations expressed dissatisfaction. Russia reiterated concerns about the inactivity of certain POCs and the insufficient authority of some technical POCs, which hampers their ability to respond to Russian notifications—echoing points raised during the 9th session’s CBM discussions. Germany and France jointly addressed issues with a specific state’s use of the POC Directory, noting that their technical POCs received notifications about malicious cyber activities linked to IP addresses in their respective countries. They recommended redirecting these requests to appropriate national authorities; however, identical requests continued to be sent to their technical POCs. This behaviour, they argued, contradicts the principle that the POC Directory should complement existing CERT-to-CERT channels designed for such requests. 

Without directly referencing these situations, China observed that, given the voluntary nature of the POC Directory, member states are free to determine the functions of their POCs, as well as the types and channels of messages they handle. This scenario highlights a broader lack of clear, consensual understanding regarding the POC Directory’s intended use. Mauritius emphasised the need to define clear thresholds for reportable incidents, while Cuba stressed the importance of detailing circumstances under which information exchange should occur. On a side note, the EU proposed that the private sector could participate in the POC directory.

Towards a more integrated approach: CBMs and capacity-building

Most states reaffirmed that capacity-building is a prerequisite to CBM implementation (Kazakhstan, Tonga, Russia, Thailand, Malawi, Laos, Ghana). Cuba and India voiced their interest in integrating the POC Directory into the global portal for capacity-building as a central access point and a core knowledge hub for resources. Pakistan argued that the POC Directory goes beyond crisis management and serves as a foundation for broader collaboration, including capacity-building.

Capacity building: Positive feedback but uncertain objectives

Just like CBMs, the capacity-building agenda item is resolutely oriented towards pragmatic discussions, and the 10th session proved again to be a privileged forum for member states to share their national and regional practices and initiatives (the EU, Colombia, Singapore, Bosnia and Herzegovina, Poland, Korea, Thailand, Canada, Israel, Albania, Japan, Morocco, Oman, Ukraine, Russia).

Among these experiences, a significant number of states specifically highlighted the benefits of various fellowships (Kuwait, Iran), among them the Women in International Security and Cyberspace Fellowship (Mauritius, Ghana, Albania, Kazakhstan, Democratic Republic of Congo, Samoa, Paraguay, El Salvador) and the UN-Singapore Fellowship (Mauritius, Ghana, Albania, Nigeria, Democratic Republic of Congo). In that vein, Nigeria and Kuwait proposed holding new fellowship programmes under the auspices of the UN, similar to other UN fellowships related to international security matters.

Cyber-capacity building on a budget

One main discussion item was the Secretariat’s paper about the Voluntary Fund. A significant number of states expressed their support for the fund (El Salvador, Colombia, South Africa, Rwanda, Morocco, Zimbabwe, Brazil, Kiribati, Cote d’Ivoire, Ecuador, Fiji and the Democratic Republic of Congo), and consensus largely emerged on the need not to duplicate existing funding initiatives and to reflect on its link with the World Bank Multi-Donor Trust Fund (Germany, European Union, Kuwait, Australia). France specifically questioned whether the UN was a fit structure to support such capacity-building activities and argued that it could be better positioned to play a role in linking existing initiatives.

Western countries shared their capacity-building initiatives and specifically addressed the issue of costs. The Netherlands voiced the need to consider the cost efficiency of this initiative, and Canada asked for a more detailed budget, given that the costs presented are higher than those for similar activities that Canada usually finances. Australia reminded the audience that a new trust fund does not mean new money and that it could not support the proposal under its current formulation.

A large share of countries nevertheless positioned themselves in favour of open contributions from interested stakeholders other than member states, such as the private sector, NGOs, academia or philanthropic foundations (Argentina, Paraguay, Malawi, Mauritius, Nigeria, Mexico). Yet, Russia voiced its wariness concerning NGOs and companies sponsoring the fund as they may attempt to exert pressure.

Cuba and Iran warned against the constraining aspect of the fund. Iran specified that the principles guiding capacity-building mentioned in paragraph 10 did not enjoy consensus among member states and warned against attempts to condition capacity-building activities on the adoption of norms.

A portal, sure – but what for?

A second pivotal discussion item was the Secretariat’s paper about the development of a dedicated portal for cooperation and capacity-building, based on a proposal made by India and member states’ views. Again, positions converged on the idea of a portal (Colombia, United Kingdom, Morocco, Oman, Zimbabwe, Ecuador, Nigeria, El Salvador, South Africa, Rwanda). Consensus also emerged around the fact that it should not duplicate already existing portals and initiatives, such as the UNIDIR Cyber Policy Portal and the Global Forum on Cyber Expertise (GFCE) Cybil Knowledge Portal (Fiji, Mexico, Tonga, Latvia, Mauritius, Germany, France, Samoa, Indonesia, Switzerland, Brazil, Argentina, the Netherlands).

Some delegations tackled the issue in a very pragmatic way. Korea questioned whether simply including direct links to existing portals was appropriate (supported by the UK) and proposed a technical review of integrating existing resources, including the POC Directory, into a new portal to establish an integrated platform (backed by Malaysia). Latvia reflected on potential administrative limitations and UN procurement rules concerning linkages with other websites, based on a previous IGF experience.

The Secretariat wrapped up this discussion by specifying that the sections pertaining to the technical and administrative requirements were coordinated with the ICT office in charge of UN-hosted platforms and websites, and encouraged member states to take a closer look at these sections. Still pertaining to pragmatic questions, Mauritius and India proposed that the portal be multilingual.

The level of public access to the portal was also discussed. Korea and Kazakhstan proposed that the portal remain fully accessible to the public. Other states introduced nuance. The Netherlands asked for the POC Directory to remain accessible to member states only, whereas Cote d’Ivoire proposed that only modules 1 and 5 (respectively, the repository of documents and resources and the platform for exchange of information, including the potential participation of non-governmental entities) be made public. India further suggested three levels of access: member states, stakeholders and the general public.

A major point of contention remains the exact content of this portal. Some states reaffirmed an incremental approach to the content of the portal (Kazakhstan, the EU, Australia), starting with basic functionalities, without necessarily specifying what those basic functionalities should be. China and Russia specifically warned against the use of the portal to facilitate information sharing regarding response to threats and incidents.

Indonesia suggested a specific section for stakeholders to share their own best practices, research papers, etc., whereas Russia asked for NGO contributions to be published only for states’ information. On a side note, Cote d’Ivoire proposed publishing an indicative quarterly or annual calendar along with the monthly publication of capacity-building initiatives and events.

The future permanent mechanism: How to tackle capacity building

States also tackled how to structure capacity-building discussions within the future permanent mechanism. Iran, Argentina, Brazil and Paraguay supported the proposal to have a dedicated working group on capacity-building, as circulated in the Chair’s discussion paper. A vast majority of states defended a cross-cutting approach to capacity-building, with this agenda item being discussed across thematic groups (Tonga, Vanuatu, Canada, Kazakhstan, Kiribati, Ireland, Ukraine, Fiji).

Some delegates proposed mixed approaches, such as the EU and Australia’s similar view that thematic groups can help identify gaps and specific challenges pertaining to capacity-building, and that these reflections can fuel a horizontal capacity-building discussion in plenary. Indonesia suggested that the thematic groups were the place to focus on technical recommendations rather than duplicating high-level policy discussions. In that vein, Indonesia also suggested establishing terms of reference to frame these discussions.

Finally, states expressed their support for the organisation of high-level panels such as the Global Roundtable on ICT Capacity Building held in May 2024 (the UK, Morocco, Zimbabwe, Kazakhstan, Ukraine, Germany). Thailand recommended that such high-level panels be held on a biannual basis, and Australia suggested considering them a ‘capacity-building exposition’. Canada argued that they should be held at a level other than ministerial to distinguish them from plenary work, and further proposed that they could be a venue for beneficiaries to meet organisations deploying capacity-building activities. The Chair recalled the initial scepticism around this initiative but recommended that the Final Report include a decision on the next Global Roundtable.

Regular institutional dialogue: Consensus distant

The agenda item on regular institutional dialogue captured the most attention of the 10th substantive session: more than 60 delegations in total spoke on this issue. This is not surprising, as the current OEWG’s mandate ends in July and the Chair still does not sense a general consensus on what the future permanent mechanism will be. States’ statements showed that few are ready to make concessions and be flexible in discussing the modalities of multistakeholder participation in the future permanent mechanism, as well as its architecture.

As delegations began to repeat their positions from last year, a sharp intervention from the Chair warned them that very little time remains until the OEWG’s mandate ends, and that if states do not want to disrupt a process that has been going on for more than 20 years, they must make an effort and consider where they can be flexible in their positions.

The Chair also cautioned against equating the future permanent mechanism with either the OEWG or the PoA, noting that some participants remain attached to these frameworks. Instead, the future permanent mechanism should be seen as a synthesis of various proposals, including elements from both the OEWG and the PoA. The Chair pointed out the high risk of ultimately not reaching consensus on the future permanent mechanism, noting that ‘the risk is even higher than ever before in this 5-year process’.

The long-running issue of multistakeholder modalities 

The problem of stakeholder participation remained the most contentious. Many European and South American states, as well as Canada, put together a joint proposal to make accreditation a more transparent process, with disclosure of the basis for objections and mechanisms to enable participation by as many stakeholders as possible. The main principle is ‘to have a voice, not a vote’. Their argument was that stakeholders can serve as experts, especially in thematic groups whose work requires a deeper dive into the issues on the table. Some states also advocated for giving the floor to stakeholders during plenaries.

On the contrary, Russia and other like-minded states insisted on keeping the already agreed OEWG modalities. The non-objection rule must remain in place, but this group of states sees the option of disclosing the reasons for an objection as a violation of a state’s sovereign right. They also oppose letting the Chair discuss the accreditation of a particular stakeholder with other states to overcome a veto by voting or any other procedure, and they reject the idea that stakeholders who have received objections be designated as provisional participants.

Another suggestion was to draw on already existing modalities for participation, and states recalled the Ad Hoc Committee on Cybercrime, but Iran said it was not suitable since it was a temporary body with a specific mandate and a limited working period.

The many proposals for thematic groups

The topic that brought the most variations to the discussion was the number and scope of dedicated thematic groups. Some of the proposals were:

  • to keep the ‘OEWG pillars’ structure and have the same groups, though this raised concerns about duplicating the plenaries;
  • to merge some groups and introduce new ones (the Chair’s proposal);
  • to have three cross-cutting thematic groups on resilience, cooperation, and stability (France);
  • to have three groups on threat prevention and response, the application of international law and existing and future norms, and capacity building (the African group of states).

The majority of states supported either having a special group on capacity building or providing for practical discussions of capacity building across the other groups that will be created.

Also, there was a discussion on whether to create a dedicated group on international law or to combine international law with norms. The latter idea was criticised by the USA, Russia, Israel, and Germany since it merges two distinct areas of binding and voluntary regulation. Switzerland suggested discussing international law as a cross-cutting issue through all groups, similar to capacity building.

Additionally, there were proposals to create a dedicated group on the prevention of conflicts and a dedicated group on critical infrastructure, but they did not attract many supporters.

As for the French proposal, which was upheld by the EU member states, the ‘cross-cutting policy-issue-focused working groups’ would go deeper into each OEWG pillar in a balanced way and then feed the results back into the plenary, which is structured in the same way as the current OEWG.

The Chair intervened in the middle of the discussion, asking the delegates to stop thinking in a binary way (either a pillars approach or a cross-cutting one for the thematic groups) and to contemplate how to combine them.

Some states, as well as the Chair, noted that the thematic groups do not have to be cemented right now: there is an option of shifting agendas, the creation of ad hoc groups, and rearrangement of the groups after the first review conference of the future permanent mechanism.

Overall, the general impression is that states are inclined to have three groups rather than five to meet the concerns of smaller delegations. 

The format of thematic groups: hybrid or in-person

Delegations also expressed concerns about whether the format will be hybrid or in-person only. Both options have advantages, but some states worry about delegations’ limited resources for attending group meetings and plenaries in New York, while others question whether a hybrid format would be suitable for formal meetings and allow for closer bilateral and group engagements.

What’s next?

With regular institutional dialogue remaining the most pressing and complex issue on the OEWG’s agenda, the coming two months will require heavy lifting from the Chair and his secretariat. In March and April, the Chair will reflect on thematic groups and prepare a revised set of modalities, followed by a town hall meeting to discuss them. The Chair will also reflect on modalities for stakeholder participation, followed by a separate town hall on this topic.

The zero draft of the Final Report will be made available in May, after which one or more virtual town hall meetings to discuss it will be held. The OEWG is expected to adopt its Final Report at its eleventh substantive meeting in July.

We used our DiploAI system to generate reports and transcripts from the session. Browse them on the dedicated page.

Interested in more OEWG? Visit our dedicated page:

UN Open-ended Working Group (OEWG)
This page provides detailed and real-time coverage on cybersecurity, peace and security negotiations at UN Open-Ended Working Group (OEWG) on security of and in the use of information and communications technologies 2021–2025.

Basketball spirit through cutting-edge technology: What did the NBA Tech Summit deliver?

On Valentine’s Day in San Francisco, the NBA Tech Summit took place ahead of the NBA All-Star weekend, showcasing the latest trends in sports, media, and technology. With the help of NVIDIA CEO Jensen Huang and legendary Golden State Warriors coach Steve Kerr, the audience was introduced to the evolution of event broadcasting, companies set to make significant investments in the coming years, and the future of basketball as a sport.

The panels also included renowned basketball experts, media figures, and former NBA players. A common consensus emerged: robotics and AI will reshape the sport as we know it and significantly help athletes achieve far better results than ever before.

However, this is not exactly a novelty, as many innovations were presented ahead of the Paris Olympics, where certain programmes helped analysts and audiences follow their favourite events in greater detail.

The future of the NBA and the role of fans during matches

The same idea applies to the NBA, particularly with the integration of augmented reality (AR) and a feature called ‘Tabletop’, which allows the display of a virtual court with digital avatars tracking player movements in real time.

A feature like this one generated the most interest from the audience, as it enables viewers to watch matches from various angles, analyse performances in real-time, access interactive player data, and simulate alternative outcomes—essentially exploring how the game would have unfolded if different decisions had been made on the court.

An important aspect of these innovations is that fans have the opportunity to vote for competition participants, ask real-time questions, and take part in virtual events designed to keep them engaged during and after match broadcasts.

AI plays a crucial role in these systems, primarily by analysing strategies and performances, allowing coaches and players to make better-informed decisions in key moments of the game.

Player health as a priority

With a packed schedule of matches, additional tournaments, and extensive travel, professional basketball players face daily physical challenges. To help preserve their health, new technologies aim to minimise potential injuries.

Wearable health-tracking sensors embedded in equipment to collect data on physical parameters were introduced at the NBA Summit. This technology provides medical teams with real-time insights into players’ conditions, helping prevent potential injuries.

Draymond Green with AI Robot

Biometric sensors, motion-analysis accelerometers, injury-prevention algorithms, dehydration and fatigue tracking, and shoe sensors for load analysis are just some of the innovations in this field.

Ultra cameras, ultra broadcasts, ultra experience

For fans of high-resolution and interactive matches, the latest technological advancements offer new viewing experiences. While some of these technologies are still in their final development stages, fans can already enjoy Ultra HD 8K and 360-degree cameras, along with the highly anticipated ‘player cam’ perspective, which allows for close-up tracking of individual players.

It is also possible to independently control the camera during matches, offering a complete view of the court and arena from every possible angle. Additionally, matches can be broadcast as holograms, providing a new dimension in 3D space on specialised platforms.

The integration of 5G technology ensures faster and more stable transmissions, enabling high-resolution streaming without delays.

Fewer mistakes, less stress

Refereeing mistakes have always been part of the sport, influencing match outcomes and shaping the history of one of the world’s most popular games. In response, the NBA has sought to minimise errors through Hawk-Eye technology for ball and boundary tracking.

A multi-camera system monitors the ball to determine whether it has crossed the line, touched the boundary, or whether a shot was released in time. AI also analyses player contact in real time, suggesting potential fouls for referees to review.

Beyond these features, the NBA now operates a centralised Replay Centre, offering detailed analysis of controversial situations where AI plays a crucial role in providing recommendations for quicker decision-making. Additional innovations include hoop sensors, audio analysis for detecting unsportsmanlike fouls, and more.

Environmental sustainability and awareness

As an organisation reliant on cutting-edge technology, the NBA is also focused on environmental awareness, which is increasingly becoming a key aspect of the league. Modern arenas utilise solar energy, energy-efficient lighting, and water recycling systems, reducing electricity consumption and waste.

Digital tickets and contactless payments contribute to sustainability efforts, particularly through apps that enable quicker and more eco-friendly entry to arenas and access to various services.

Partnerships with environmental organisations are a crucial part of the NBA’s sustainability initiatives, with collaborations including the Green Sports Alliance and the NRDC. These efforts aim to reduce the environmental impact of events while enhancing the fan experience.

For basketball fans (and followers of other sports adopting similar advancements), the most important takeaway is that the fundamental rules and essence of the game will remain unchanged. Despite the inevitable technological progress, the core spirit of basketball, established in Springfield in 1891, will continue to be preserved.