Meta has unveiled plans to build a 5GW data centre in Louisiana, part of a significant expansion of its AI infrastructure. CEO Mark Zuckerberg said the Hyperion complex will cover an area nearly the size of Manhattan, with the first 1.5GW phase expected online in 2026.
The company is also constructing a 1GW cluster named Prometheus in Ohio, US, which combines Meta-owned infrastructure with leased systems. Both projects will use a mix of renewable and natural gas power, underlining Meta’s strategy of ramping up compute capacity rapidly.
Zuckerberg stated Meta would invest hundreds of billions of dollars into superintelligence development, supported by elite talent recruited from major rivals. He added that the new data centres would offer the highest compute-per-researcher in the industry.
Amidst growing demand, Meta recently sought $29 billion in financing and secured 1GW of renewable power. Yet the expansion has raised environmental concerns, with one data centre in Georgia reportedly consuming 10% of a county’s water supply.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
One of the world’s most iconic sporting events — and certainly the pinnacle of professional tennis — came to a close on Sunday, as Jannik Sinner lifted his first Wimbledon trophy and Iga Świątek triumphed in the women’s singles.
The leap into the future, however, came at a cost. System failures sparked considerable controversy both during the tournament and in its aftermath.
Beyond technical faults, the move disrupted one of Wimbledon’s oldest traditions — for the first time in 138 years, AI performed the role of line judge entirely. Several players have since pointed the finger not just at the machines, but directly at those who put them in charge.
Wimbledon as the turning point for AI in sport
The 2025 edition of Wimbledon introduced a radical shift: all line calls were entrusted exclusively to the Hawk-Eye Live system, eliminating on-court line judges altogether. Human officials, once integral to the rhythm and theatre of Grand Slam tennis, were replaced by automated sensors and disembodied voices.
Rather than a triumph of innovation, the tournament became a cautionary tale.
During the second round, Britain’s Sonay Kartal faced Anastasia Pavlyuchenkova in a match that became the focal point of AI criticism. Multiple points were misjudged after a system error forced manual intervention mid-match. Kartal was visibly unsettled; Pavlyuchenkova even more so. ‘They stole the game from me,’ she said — a statement aimed not at her opponent but at the organisers.
Further problems emerged across the draw. From Taylor Fritz’s quarter-final, where a serve was wrongly ruled out, to delayed audio cues, the system’s imperfections became increasingly evident.
Athletes speak out when technology silences the human
Discontent was not confined to a few isolated voices. Across locker rooms and press conferences, players raised concerns not only about specific decisions but about the underlying principle.
Kartal later said she felt ‘undone by silence’ — referring to the machine’s failure and the absence of any human presence. Emma Raducanu and Jack Draper raised similar concerns, describing the system as ‘opaque’ and ‘alienating’. Without the option to challenge or review a call, athletes felt disempowered.
Former line judge Pauline Eyre described the transformation as ‘mechanical’, warning that AI cannot replicate the subtle understanding of rhythm and emotion inherent to human judgement. ‘Hawk-Eye doesn’t breathe. It doesn’t feel pressure. That used to be part of the game,’ she noted.
Although Wimbledon is built on tradition, the value of human oversight seems to have slipped away.
Other sports, same problem: When AI misses the mark
Wimbledon’s situation is far from unique. In various sports, AI and automated systems have repeatedly demonstrated their limitations.
In the 2020 Premier League, goal-line technology failed during a match between Aston Villa and Sheffield United, overlooking a clear goal — an error that shaped the season’s outcome.
Irish hurling suffered a similar breakdown in 2013, when the Hawk-Eye system wrongly cancelled a valid point during an All-Ireland semi-final, prompting a public apology and a temporary suspension of the technology.
Even tennis has a history of scepticism towards Hawk-Eye. Players like Rafael Nadal and Andy Murray questioned line calls, with replay footage often proving them right.
Patterns begin to emerge. Minor AI malfunctions in high-stakes settings can lead to outsized consequences. Even more damaging is the perception that the technology is beyond reproach.
From umpire to overseer: When AI watches everything
The events at Wimbledon reflect a broader trend, one seen during the Paris 2024 Olympics. As outlined in our earlier analysis of the Olympic AI agenda, AI was used extensively in scoring and judging, crowd monitoring, behavioural analytics, and predictive risk assessment.
Rather than simply officiating, AI has taken on a supervisory role: watching, analysing, interpreting — but offering little to no explanation.
Vital questions arise as the boundary between sports technology and digital governance fades. Who defines suspicious movement? What triggers an alert? Just like with Hawk-Eye rulings, the decisions are numerous, silent, and largely unaccountable.
Traditionally, sport has relied on visible judgement and clear rule enforcement. AI introduces opacity and detachment, making it difficult to understand how and why decisions are made.
The AI paradox: Trust without understanding
The more sophisticated AI becomes, the less people seem to understand it. The so-called black box effect — where outputs are accepted without insight into how they were produced — now pervades society, from medicine to finance. Sport is no exception.
At Wimbledon, players were not simply objecting to incorrect calls. They were reacting to a system that offered no explanation, human feedback, or room for dialogue. In previous tournaments, athletes could appeal or contest a decision. In 2025, they were left facing a blinking light and a pre-recorded announcement.
Such experiences highlight a growing paradox. As trust in AI increases, scrutiny declines, often precisely because people cannot question it.
That trust comes at a price. In sport, it can mean irreversible moments. In public life, it risks producing systems that are beyond challenge. Even the most accurate machine, if left unchecked, may render the human experience obsolete.
Dependency over judgement and the cost of trusting machines
The promise of AI lies in precision. But precision, when removed from context and human judgement, becomes fragile.
What Wimbledon exposed was not a failure in design, but a lapse in restraint — a human tendency to over-delegate. Players faced decisions without recourse, coaches adapted to algorithmic expectations, and fans were left outside the decision-making loop.
Whether AI can be accurate is no longer a question. It often is. The danger arises when accuracy is mistaken for objectivity — when the tool becomes the ultimate authority.
Sport has always embraced uncertainty: the unexpected volley, the marginal call, the human error. Strip that away, and something vital is lost.
A hybrid model — where AI supports but does not dictate — may help preserve fairness and trust.
Let AI enhance the game. Let humans keep it human.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Claude AI has introduced integration with Canva, enabling users to generate and manage design content using simple text prompts. The new feature allows paid users to create presentations, edit visuals, and explore templates directly within Claude’s chat interface.
Alongside Canva, Claude now supports additional connectors such as Notion and Stripe, as well as desktop apps like Figma and Prisma, expanding its ability to fetch and process data contextually. These integrations are powered by the open-source Model Context Protocol (MCP).
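For readers curious how such connectors work under the hood: MCP lets any service describe ‘tools’ that an assistant can discover and call through one open standard. Below is a minimal sketch using the official MCP Python SDK; the fetch_design tool and its output are hypothetical placeholders for illustration, not Canva’s actual integration.

```python
# Minimal MCP server sketch, assuming the official `mcp` Python SDK
# (pip install "mcp[cli]"). The fetch_design tool below is a
# hypothetical placeholder, not Canva's real connector.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("design-demo")

@mcp.tool()
def fetch_design(design_id: str) -> str:
    """Return a short text summary of a design (illustrative only)."""
    return f"Design {design_id}: 3 pages, last edited today."

if __name__ == "__main__":
    mcp.run()  # defaults to stdio transport, which desktop clients use
```

A desktop client such as Claude can then be configured to launch this server and invoke the tool mid-conversation, which is the same basic mechanism the Canva, Notion, and Stripe connectors rely on.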
Canva’s head of ecosystem highlighted that users can now generate, summarise, and publish designs in one continuous workflow within Claude. The move represents another step toward AI-first productivity, removing the need for manual app-switching during the creative process.
Claude is the first AI assistant to enable Canva workflows through MCP, following recent partnerships with tools like Figma. A new integrations directory has also launched, helping users discover compatible apps for both web and desktop experiences.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
While Gemini often dominates attention in Google’s AI portfolio, other innovative tools deserve the spotlight. One standout is NotebookLM, a virtual research assistant that helps users organise and interact with complex information across various subjects.
NotebookLM creates structured notebooks from curated materials, allowing meaningful engagement with the content. It supports dynamic features, including summaries and transformation options like Audio Overview, making research tasks more intuitive and efficient.
According to Google, featured notebooks are built using information from respected authors, academic institutions, and trusted nonprofits. Current topics include Shakespeare, Yellowstone National Park and more, offering a wide spectrum of well-sourced material.
Featured notebooks function just like regular ones, with added editorial quality. Users can navigate, explore, and repurpose content in ways that support individual learning and project needs. Google has confirmed the collection will grow over time.
NotebookLM remains in early development, yet the tool already shows potential for transforming everyday research tasks. Google also plans tighter integration with its other productivity tools, including Docs and Slides.
The tool significantly reduces the effort traditionally required for academic or creative research. Structured data presentation, combined with interactive features, makes information easier to consume and act upon.
NotebookLM was initially released on desktop but is now also available as a mobile app. Users can download it via the Google Play Store to create notebooks, add content, and stay productive from anywhere.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Eurosky seeks to build European infrastructure for social media platforms and promote digital sovereignty. The goal is to ensure that the continent’s digital space is governed by European laws, values, and rules, rather than being subject to the influence of foreign companies or governments.
To support this goal, Eurosky plans to implement a decentralised content moderation system, modelled after the approach used by the Bluesky network.
Moderation, essential for removing harmful or illegal content like child exploitation or stolen data, remains a significant obstacle for new platforms. Eurosky offers a non-profit moderation service to help emerging social media providers handle this task, thus lowering the barriers to entering the market.
The project enjoys strong public and political backing. Polls show that majorities in France, Germany, and Spain prefer Europe-based platforms, with only 5% favouring US providers.
Eurosky also has support from four European governments, though their identities remain undisclosed. This momentum aligns with a broader shift in user behaviour, as Europeans increasingly turn to local tech services amid privacy and sovereignty concerns.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
European policymakers are turning to quantum technology as a strategic solution to the continent’s growing economic and security challenges.
With the US and China surging ahead in AI, Europe sees quantum as a rare last-mover advantage it cannot afford to miss.
Quantum computers, sensors, and encryption are already transforming military, industrial and cybersecurity capabilities.
From stealth detection to next-generation batteries, Europe hopes quantum breakthroughs will bolster its defences and revitalise its energy, automotive and pharmaceutical sectors.
Although EU institutions have invested heavily in quantum programmes and Europe trains more quantum engineers than anywhere else, funding gaps persist.
Private investment remains limited, pushing some of the continent’s most promising start-ups abroad in search of capital and scale.
The EU must pair its technical excellence with bold policy reforms to avoid falling behind. Strategic protections, high-risk R&D support and new alliances will be essential to turning scientific strength into global leadership.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Universities across Asia, notably in China, are slashing liberal arts enrolments to expand STEM and AI programmes. Institutions such as Fudan and Tsinghua are reducing intake for humanities subjects as policymakers push for a high-tech workforce.
Despite this shift, educators argue that sidelining subjects like history, philosophy, and ethics threatens the cultivation of critical thinking, moral insight, and cultural literacy, which are increasingly necessary in an AI-saturated world.
They contend that humanistic reasoning remains essential for navigating AI’s societal and ethical complexities.
Innovators are pushing for hybrid models of education. Humanities courses are evolving to incorporate AI-driven archival research, digital analysis, and data-informed argumentation, turning liberal arts into tools for interpreting technology, rather than resisting it.
Supporters emphasise that liberal arts students offer distinct advantages: they excel in communication, ethical judgement, storytelling and adaptability, capacities that machines lack. These soft skills are increasingly valued in workplaces that integrate AI.
Analysts predict that the future lies not in abandoning the humanities but in transforming them. When taught alongside technical disciplines, through STEAM initiatives and cross-disciplinary curricula, liberal arts can complement AI, ensuring that technology remains anchored in human values.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Nvidia’s CEO, Jensen Huang, has downplayed concerns over Chinese military use of American AI technology, stating it is improbable that China would risk relying on US-made chips.
He noted the potential liabilities of using foreign tech, which could deter its adoption by the country’s armed forces.
In an interview on CNN’s Fareed Zakaria GPS, Huang responded to Washington’s growing export controls targeting advanced AI hardware sales to China.
He suggested the military would likely avoid US technology to reduce exposure to geopolitical risks and sanctions.
The Biden administration had tightened restrictions on AI chip exports, citing national security and fears that cutting-edge processors might boost China’s military capabilities.
Nvidia, whose chips are central to global AI development, has seen its access to the Chinese market increasingly limited under these rules.
While Nvidia remains a key supplier in the AI sector, Huang’s comments may ease some political pressure around the company’s overseas operations.
The broader debate continues over balancing innovation, commercial interest and national security in the AI age.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Europe is facing a growing wave of AI-powered fake news and coordinated bot attacks that overwhelm media, fact-checkers, and online platforms, marking a shift away from older propaganda methods.
According to the European Policy Centre, networks using advanced AI now spread deepfakes, hoaxes, and fake articles faster than they can be debunked, raising concerns over whether EU rules are keeping up.
Since late 2024, the so-called ‘Overload’ operation has doubled its activity, sending an average of 2.6 fabricated content proposals a day while also deploying thousands of bot accounts and fake videos.
These efforts aim to disrupt public debate through election intimidation, the discrediting of individuals, and the sowing of panic in place of open discussion. Experts warn that without stricter enforcement, the EU’s Digital Services Act risks becoming ineffective.
To address the problem, analysts suggest that Europe must invest in real-time threat sharing between platforms, scalable AI detection systems, and narrative literacy campaigns to help citizens recognise manipulative content instead of depending only on fact-checkers.
Publicly naming and penalising non-compliant platforms would give the Digital Services Act more weight.
The European Parliament has already acknowledged widespread foreign-backed disinformation and cyberattacks targeting EU countries. Analysts say stronger action is required to protect the information space from systematic manipulation instead of allowing hostile narratives to spread unchecked.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Meta has acquired California-based startup PlayAI to strengthen its position in AI voice technology. PlayAI specialises in replicating human-like voices, offering Meta a route to enhance conversational AI features instead of relying solely on text-based systems.
According to reports, the PlayAI team will join Meta next week.
Although financial terms have not been disclosed, industry sources suggest the deal is worth tens of millions of dollars. Meta aims to use PlayAI’s expertise across its platforms, from social media apps to devices like Ray-Ban smart glasses.
The move is part of Meta’s push to keep pace with competitors like Google and OpenAI in the generative AI race.
Talent acquisition plays a key role in the strategy. By absorbing smaller, specialised teams like PlayAI’s, Meta focuses on integrating technology and expert staff instead of developing every capability in-house.
The PlayAI team will report directly to Meta’s AI leadership, underscoring the company’s focus on voice-driven interactions and metaverse experiences.
Bringing PlayAI’s voice replication tools into Meta’s ecosystem could lead to more realistic AI assistants and new creator tools for platforms like Instagram and Facebook.
However, the expansion of voice cloning raises ethical and privacy concerns that Meta will need to manage carefully to preserve user trust.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!