Meta unveils 5GW AI data centre plans

Meta has unveiled plans to build a 5GW data centre in Louisiana, part of a significant expansion of its AI infrastructure. CEO Mark Zuckerberg said the Hyperion complex will cover an area nearly the size of Manhattan, with the first 1.5GW phase expected online in 2026.

The company is also constructing a 1GW cluster named Prometheus in Ohio, US, which combines Meta-owned infrastructure with leased systems. Both projects will use a mix of renewable and natural gas power, underlining Meta’s strategy to ramp up compute capacity rapidly.

Zuckerberg stated Meta would invest hundreds of billions of dollars into superintelligence development, supported by elite talent recruited from major rivals. He added that the new data centres would offer the highest compute-per-researcher in the industry.

Amidst growing demand, Meta recently sought $29 billion in financing and secured 1GW of renewable power. Yet the expansion has raised environmental concerns, with one data centre in Georgia reportedly consuming 10% of a county’s water supply.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI economist shares four key skills for kids in AI era

As AI reshapes jobs and daily life, OpenAI’s chief economist, Ronnie Chatterji, teaches his children four core skills to help them adapt and thrive.

Instead of relying solely on technology, he believes critical thinking, adaptability, emotional intelligence, and financial numeracy will remain essential.

Chatterji highlighted these skills during an episode of the OpenAI podcast, saying critical thinking helps children spot problems rather than follow instructions. Given constant changes in AI, climate, and geopolitics, he stressed adaptability as another priority.

Rather than expecting children to master coding alone, Chatterji argues that emotional intelligence will make humans valuable partners alongside AI.

The fourth skill he emphasises is financial numeracy, including understanding maths without calculators and maintaining writing skills even with dictation software available. Instead of predicting specific future job titles, Chatterji believes focusing on these abilities equips children for any outcome.

His approach reflects a broader trend among tech leaders, with others like Alexis Ohanian and Sam Altman also promoting AI literacy while valuing traditional skills such as reading, writing, and arithmetic.

Trump unveils AI economy with $100 billion investment push

Donald Trump revealed during the Pennsylvania Energy and Innovation Summit in Pittsburgh that the US will receive over $100 billion in investments to drive its AI economy and energy infrastructure.

The funding is set to create tens of thousands of jobs across the energy and AI sectors, with Pennsylvania positioned as a central hub.

Trump stated the US is already ‘way ahead of China’ in AI development, adding that staying in the lead will require expanding power production.

Instead of relying solely on renewables, Trump highlighted ‘clean, beautiful’ coal, oil, and nuclear energy as key pillars supporting AI-related growth.

Westinghouse plans to build several nuclear plants nationwide, while Knighthead Capital will invest $15 billion in North America’s largest natural gas power plant in Homer City, Pennsylvania.

Additionally, Google will revitalise two hydropower facilities within the state, contributing to the broader investment wave. Trump mentioned that 20 major technology and energy firms are preparing further commitments in Pennsylvania, reinforcing its role in what he calls the US ‘AI economy’.

The event, hosted by Senator Dave McCormick at Carnegie Mellon University, also featured discussions with Pennsylvania Governor Josh Shapiro.

San Francisco deploys AI assistant to 30,000 staff

San Francisco has equipped almost 30,000 city employees, from social workers and healthcare staff to administrators, with Microsoft 365 Copilot Chat. The large-scale rollout followed a six-month pilot in which workers gained up to five extra hours a week on routine tasks, particularly in 311 service lines.

Copilot Chat helps streamline bureaucratic functions, such as drafting documents, translating across more than 40 languages, summarising lengthy reports, and analysing data. The goal is to free staff to focus more on serving residents directly.

A comprehensive five-week training scheme, supported by InnovateUS, ensures that employees learn to use AI securely and responsibly. This includes best practices for data protection, transparent disclosure of AI-generated content, and thorough fact-checking procedures.

City leadership emphasises that all AI tools run on a secure government cloud and adhere to robust guidelines. Employees must reveal when AI is used and remain accountable for its output. The city also plans future AI deployments in traffic management, permitting, and connecting homeless individuals with support services.

Meta opens audio lab to improve AI smart glasses

Meta has unveiled a £12 million audio research lab in Cambridge’s Ox‑Cam corridor, aimed at enhancing immersive sound for its Ray‑Ban Meta and upcoming Oakley Meta glasses. The facility includes advanced acoustic testing environments, motion‑tracked living spaces, and one of the world’s largest configurable reverberation chambers, enabling engineers to fine‑tune spatial audio through real‑world scenarios.

The lab is developing adaptive audio designed to filter noise, focus on speech, and respond to head movement, improving clarity in settings like busy streets or on public transport. Meta plans to integrate these features into its next generation of AR eyewear.

Officials say the lab represents a long‑term investment in UK engineering talent and bolsters the Oxford‑to‑Cambridge tech corridor. Meta’s global affairs lead and the Chancellor emphasised the significance of the investment, supported by a national £22 billion R&D strategy. This marks Meta’s largest overseas engineering base and reinforces its ambition to lead the global AI glasses market.

Defence AI Centre at heart of Korean strategy

South Korea has unveiled a strategy to share extensive military data with defence firms to accelerate AI-powered weapon systems, inspired by US military cloud initiatives. Plans include a national public–private fund to finance innovation and bolster the country’s defence tech prowess.

A specialised working group of around 30 experts, including participants from the Defence Acquisition Program Administration, is drafting standards for safety and reliability in AI weapon systems. Their work aims to lay the foundations for the responsible integration of AI into defence hardware.

Officials highlight the need to merge classified military databases into a consolidated defence cloud, moving away from siloed systems. This model follows the tiered cloud framework adopted by the US, enabling more agile collaboration between the military and industry.

South Korea is also fast-tracking development across core defence domains, such as autonomous drones, command-and-control systems, AI-enabled surveillance, and cyber operations. These efforts are underpinned by the recently established Defence AI Centre, positioning the country at the forefront of Asia’s military AI race.

Women see AI as more harmful across life settings

Women are showing more scepticism than men when it comes to AI, particularly regarding its ethics, fairness, and transparency.

A national study from Georgetown University, Boston University and the University of Vermont found that women were more concerned about AI’s risks in decision-making. Concerns were especially prominent around AI tools used in the workplace, such as hiring platforms and performance review systems.

Bias may be introduced when such tools rely on historical data, which often underrepresents women and other marginalised groups. The study also found that gender influenced compliance with workplace rules surrounding AI use, especially in restrictive environments.

When AI use was banned, women were more likely to follow the rules than men. Usage jumped when tools were explicitly permitted. In cases where AI was allowed, over 80% of both women and men reported using the tools.

Women were generally more wary of AI’s impact across all areas of life — not just in the professional sphere. From personal settings to public life, survey respondents who identified as women consistently viewed AI as more harmful than beneficial.

The study, conducted via Qualtrics in August 2023, surveyed a representative US sample across different educational and professional backgrounds. Participants were 45 years old on average, and over half identified as women.

The research comes amid wider concerns in the AI field about ethics and accountability, often led by women researchers. High-profile cases include Google’s dismissal of Timnit Gebru and later Margaret Mitchell, both of whom raised ethical concerns about large language models.

The study’s authors concluded that building public trust in AI may require clearer policies and greater transparency in how systems are designed. They also highlighted the importance of increasing diversity among those developing AI tools to ensure more inclusive outcomes.

No judges, no appeals, no fairness: Wimbledon 2025 shows what happens when AI takes over

One of the world’s most iconic sporting events — and certainly the pinnacle of professional tennis — came to a close on Sunday, as Jannik Sinner lifted his first Wimbledon trophy and Iga Świątek triumphed in the women’s singles.

While the two new champions will remember this tournament for a lifetime, Wimbledon 2025 will also be recalled for another reason: the organisers’ decision to hand over crucial match decisions to AI-powered systems.

The leap into the future, however, came at a cost. System failures sparked considerable controversy both during the tournament and in its aftermath.

Beyond technical faults, the move disrupted one of Wimbledon’s oldest traditions — for the first time in 138 years, AI performed the role of line judge entirely. Several players have since pointed the finger not just at the machines, but directly at those who put them in charge.

Wimbledon as the turning point for AI in sport

The 2025 edition of Wimbledon introduced a radical shift: all line calls were entrusted exclusively to the Hawk-Eye Live system, eliminating the on-court officials. The sight of a human line judge, once integral to the rhythm and theatre of Grand Slam tennis, was replaced by automated sensors and disembodied voices.

Rather than a triumph of innovation, the tournament became a cautionary tale.

During the second round, Britain’s Sonay Kartal faced Anastasia Pavlyuchenkova in a match that became the focal point of AI criticism. Multiple points were misjudged due to a system error requiring manual intervention mid-match. Kartal was visibly unsettled; Pavlyuchenkova even more so. ‘They stole the game from me,’ she said — a statement aimed not at her opponent but the organisers.

Further problems emerged across the draw. From a serve wrongly ruled out in Taylor Fritz’s quarterfinal to delayed audio cues, the system’s imperfections became increasingly evident.

Athletes speak out when technology silences the human

Discontent was not confined to a few isolated voices. Across locker rooms and at press conferences, players voiced concerns about specific decisions and the underlying principle.

Kartal later said she felt ‘undone by silence’ — referring to the machine’s failure and the absence of any human presence. Emma Raducanu and Jack Draper raised similar concerns, describing the system as ‘opaque’ and ‘alienating’. Without the option to challenge or review a call, athletes felt disempowered.

Former line judge Pauline Eyre described the transformation as ‘mechanical’, warning that AI cannot replicate the subtle understanding of rhythm and emotion inherent to human judgement. ‘Hawk-Eye doesn’t breathe. It doesn’t feel pressure. That used to be part of the game,’ she noted.

Although Wimbledon is built on tradition, the value of human oversight seems to have slipped away.

Other sports, same problem: When AI misses the mark

Wimbledon’s situation is far from unique. In various sports, AI and automated systems have repeatedly demonstrated their limitations.

In the 2020 Premier League, goal-line technology failed during a match between Aston Villa and Sheffield United, overlooking a clear goal — an error that shaped the season’s outcome.

Irish hurling suffered a similar breakdown in 2013, when the Hawk-Eye system wrongly cancelled a valid point during an All-Ireland semi-final, prompting a public apology and a temporary suspension of the technology.

Even tennis has a history of scepticism towards Hawk-Eye. Players like Rafael Nadal and Andy Murray questioned line calls, with replay footage often proving them right.

Patterns begin to emerge. Minor AI malfunctions in high-stakes settings can lead to outsized consequences. Even more damaging is the perception that the technology is beyond reproach.

From umpire to overseer: When AI watches everything

The events at Wimbledon reflect a broader trend, one seen during the Paris 2024 Olympics. As outlined in our earlier analysis of the Olympic AI agenda, AI was used extensively in scoring and judging, crowd monitoring, behavioural analytics, and predictive risk assessment.

Rather than simply officiating, AI has taken on a supervisory role: watching, analysing, interpreting — but offering little to no explanation.

Vital questions arise as the boundary between sports technology and digital governance fades. Who defines suspicious movement? What triggers an alert? Just like with Hawk-Eye rulings, the decisions are numerous, silent, and largely unaccountable.

Traditionally, sport has relied on visible judgement and clear rule enforcement. AI introduces opacity and detachment, making it difficult to understand how and why decisions are made.

The AI paradox: Trust without understanding

The more sophisticated AI becomes, the less people seem to understand it. The so-called black box effect — where outputs are accepted without clarity on inputs — now exists across society, from medicine to finance. Sport is no exception.

At Wimbledon, players were not simply objecting to incorrect calls. They were reacting to a system that offered no explanation, human feedback, or room for dialogue. In previous tournaments, athletes could appeal or contest a decision. In 2025, they were left facing a blinking light and a pre-recorded announcement.

Such experiences highlight a growing paradox. As trust in AI increases, scrutiny declines, often precisely because people cannot question it.

That trust comes at a price. In sport, it can mean irreversible moments. In public life, it risks producing systems that are beyond challenge. Even the most accurate machine, if left unchecked, may render the human experience obsolete.

Dependency over judgement and the cost of trusting machines

The promise of AI lies in precision. But precision, when removed from context and human judgement, becomes fragile.

What Wimbledon exposed was not a failure in design, but a lapse in restraint — a human tendency to over-delegate. Players faced decisions without recourse, coaches adapted to algorithmic expectations, and fans were left outside the decision-making loop.

Whether AI can be accurate is no longer a question. It often is. The danger arises when accuracy is mistaken for objectivity — when the tool becomes the ultimate authority.

Sport has always embraced uncertainty: the unexpected volley, the marginal call, the human error. Strip that away, and something vital is lost.

A hybrid model — where AI supports but does not dictate — may help preserve fairness and trust.

Let AI enhance the game. Let humans keep it human.

Claude integrates Canva to power AI-first workflows

Claude AI has introduced integration with Canva, enabling users to generate and manage design content using simple text prompts. The new feature allows paid users to create presentations, edit visuals, and explore templates directly within Claude’s chat interface.

Alongside Canva, Claude now supports additional connectors such as Notion and Stripe, as well as desktop apps like Figma and Prisma, expanding its ability to fetch and process data contextually. These integrations are powered by the open-source Model Context Protocol (MCP).

Canva’s head of ecosystem highlighted that users can now generate, summarise, and publish designs in one continuous workflow within Claude. The move represents another step toward AI-first productivity, removing the need for manual app-switching during the creative process.

Claude is the first AI assistant to enable Canva workflows through MCP, following recent partnerships with tools like Figma. A new integrations directory has also launched, helping users discover compatible apps for both web and desktop experiences.

Trump launches $70 billion AI and energy investment plan

President Donald Trump has announced a $70 billion initiative to strengthen America’s energy and data infrastructure to meet growing AI-driven demand. The plan was revealed at Pittsburgh’s Pennsylvania Energy and Innovation Summit, with over 60 leading energy and tech CEOs in attendance.

The investment will prioritise US states such as Pennsylvania, Texas, and Georgia, where energy grids are increasingly under pressure due to rising data centre usage. Part of the funding will come from federal-private partnerships, alongside potential reforms led by the Department of Energy.

Analysts suggest the plan would redirect federal support away from wind and solar energy in favour of nuclear and fossil fuel development. The proposal may also scale back green tax credits introduced under the Inflation Reduction Act, potentially affecting more than 300 gigawatts of renewable capacity.

The package includes a project to transform a disused steel mill in Aliquippa into a large-scale data centre hub, forming part of a broader strategy to establish new AI-energy corridors. Critics argue the plan could prioritise legacy systems over decarbonisation, even as AI pushes infrastructure to its limits.
