Meta opens audio lab to improve AI smart glasses

Meta has unveiled a £12 million audio research lab in Cambridge’s Ox‑Cam corridor, aimed at enhancing immersive sound for its Ray‑Ban Meta and upcoming Oakley Meta glasses. The facility includes advanced acoustic testing environments, motion‑tracked living spaces, and one of the world’s largest configurable reverberation chambers, enabling engineers to fine‑tune spatial audio through real‑world scenarios.

The lab is developing adaptive audio designed to filter noise, focus on speech, and respond to head movement, intelligent enough to improve clarity in settings like busy streets or on public transport. Meta plans to integrate these features into its next generation of AR eyewear.

Officials say the lab represents a long‑term investment in UK engineering talent and bolsters the Oxford‑to‑Cambridge tech corridor. Meta’s global affairs lead and the Chancellor emphasised the significance of the investment, supported by a national £22 billion R&D strategy. This marks Meta’s largest overseas engineering base and reinforces its ambition to lead the global AI glasses market.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Women see AI as more harmful across life settings

Women are showing more scepticism than men when it comes to AI, particularly regarding its ethics, fairness and transparency.

A national study from Georgetown University, Boston University and the University of Vermont found that women were more concerned about AI’s risks in decision-making. Concerns were especially prominent around AI tools used in the workplace, such as hiring platforms and performance review systems.

Bias may be introduced when such tools rely on historical data, which often underrepresents women and other marginalised groups. The study also found that gender influenced compliance with workplace rules surrounding AI use, especially in restrictive environments.

When AI use was banned, women were more likely than men to follow the rules. Usage jumped when the tools were explicitly permitted: in those cases, over 80% of both women and men reported using them.

Women were generally more wary of AI’s impact across all areas of life — not just in the professional sphere. From personal settings to public life, survey respondents who identified as women consistently viewed AI as more harmful than beneficial.

The study, conducted via Qualtrics in August 2023, surveyed a representative US sample. Participants were 45 years old on average, just over half identified as women, and they came from a range of educational and professional backgrounds.

The research comes amid wider concerns in the AI field about ethics and accountability, often led by women researchers. High-profile cases include Google’s dismissal of Timnit Gebru and later Margaret Mitchell, both of whom raised ethical concerns about large language models.

The study’s authors concluded that building public trust in AI may require clearer policies and greater transparency in how systems are designed. They also highlighted the importance of increasing diversity among those developing AI tools to ensure more inclusive outcomes.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

No judges, no appeals, no fairness: Wimbledon 2025 shows what happens when AI takes over

One of the world’s most iconic sporting events — and certainly the pinnacle of professional tennis — came to a close on Sunday, as Jannik Sinner lifted his first Wimbledon trophy and Iga Świątek triumphed in the women’s singles.

While the two new champions will remember this tournament for a lifetime, Wimbledon 2025 will also be recalled for another reason: the organisers’ decision to hand over crucial match decisions to AI-powered systems.

The leap into the future, however, came at a cost. System failures sparked considerable controversy both during the tournament and in its aftermath.

Beyond technical faults, the move disrupted one of Wimbledon’s oldest traditions — for the first time in 138 years, AI performed the role of line judge entirely. Several players have since pointed the finger not just at the machines, but directly at those who put them in charge.


Wimbledon as the turning point for AI in sport

The 2025 edition of Wimbledon introduced a radical shift: all line calls were entrusted exclusively to the Hawk-Eye Live system, eliminating the on-court officials. The sight of a human line judge, once integral to the rhythm and theatre of Grand Slam tennis, was replaced by automated sensors and disembodied voices.

Rather than a triumph of innovation, the tournament became a cautionary tale.

During the second round, Britain’s Sonay Kartal faced Anastasia Pavlyuchenkova in a match that became the focal point of AI criticism. Multiple points were misjudged after a system error forced manual intervention mid-match. Kartal was visibly unsettled; Pavlyuchenkova even more so. ‘They stole the game from me,’ she said, a statement aimed not at her opponent but at the organisers.

Further problems emerged across the draw. The system’s imperfections became increasingly evident, from a serve wrongly ruled out in Taylor Fritz’s quarterfinal to delayed audio cues.

Athletes speak out when technology silences the human

Discontent was not confined to a few isolated voices. Across locker rooms and at press conferences, players voiced concerns about specific decisions and the underlying principle.

Kartal later said she felt ‘undone by silence’ — referring to the machine’s failure and the absence of any human presence. Emma Raducanu and Jack Draper raised similar concerns, describing the system as ‘opaque’ and ‘alienating’. Without the option to challenge or review a call, athletes felt disempowered.

Former line judge Pauline Eyre described the transformation as ‘mechanical’, warning that AI cannot replicate the subtle understanding of rhythm and emotion inherent to human judgement. ‘Hawk-Eye doesn’t breathe. It doesn’t feel pressure. That used to be part of the game,’ she noted.

Although Wimbledon is built on tradition, the value of human oversight seems to have slipped away.

Other sports, same problem: When AI misses the mark

Wimbledon’s situation is far from unique. In various sports, AI and automated systems have repeatedly demonstrated their limitations.

In the 2020 Premier League, goal-line technology failed during a match between Aston Villa and Sheffield United, overlooking a clear goal — an error that shaped the season’s outcome.

Irish hurling suffered a similar breakdown in 2013, when the Hawk-Eye system wrongly cancelled a valid point during an All-Ireland semi-final, prompting a public apology and a temporary suspension of the technology.

Even tennis has a history of scepticism towards Hawk-Eye. Players like Rafael Nadal and Andy Murray questioned line calls, with replay footage often proving them right.

A pattern begins to emerge: minor AI malfunctions in high-stakes settings can lead to outsized consequences. Even more damaging is the perception that the technology is beyond reproach.

From umpire to overseer: When AI watches everything

The events at Wimbledon reflect a broader trend, one seen during the Paris 2024 Olympics. As outlined in our earlier analysis of the Olympic AI agenda, AI was used extensively in scoring and judging, crowd monitoring, behavioural analytics, and predictive risk assessment.

Rather than simply officiating, AI has taken on a supervisory role: watching, analysing, interpreting — but offering little to no explanation.

Vital questions arise as the boundary between sports technology and digital governance fades. Who defines suspicious movement? What triggers an alert? Just like with Hawk-Eye rulings, the decisions are numerous, silent, and largely unaccountable.

Traditionally, sport has relied on visible judgement and clear rule enforcement. AI introduces opacity and detachment, making it difficult to understand how and why decisions are made.

The AI paradox: Trust without understanding

The more sophisticated AI becomes, the less people seem to understand it. The so-called black box effect — where outputs are accepted without clarity on inputs — now exists across society, from medicine to finance. Sport is no exception.

At Wimbledon, players were not simply objecting to incorrect calls. They were reacting to a system that offered no explanation, human feedback, or room for dialogue. In previous tournaments, athletes could appeal or contest a decision. In 2025, they were left facing a blinking light and a pre-recorded announcement.

Such experiences highlight a growing paradox. As trust in AI increases, scrutiny declines, often precisely because people cannot question it.

That trust comes at a price. In sport, it can mean irreversible moments. In public life, it risks producing systems that are beyond challenge. Even the most accurate machine, if left unchecked, may render the human experience obsolete.

Dependency over judgement and the cost of trusting machines

The promise of AI lies in precision. But precision, when removed from context and human judgement, becomes fragile.

What Wimbledon exposed was not a failure in design, but a lapse in restraint — a human tendency to over-delegate. Players faced decisions without recourse, coaches adapted to algorithmic expectations, and fans were left outside the decision-making loop.

Whether AI can be accurate is no longer a question. It often is. The danger arises when accuracy is mistaken for objectivity — when the tool becomes the ultimate authority.

Sport has always embraced uncertainty: the unexpected volley, the marginal call, the human error. Strip that away, and something vital is lost.

A hybrid model — where AI supports but does not dictate — may help preserve fairness and trust.

Let AI enhance the game. Let humans keep it human.


Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Claude integrates Canva to power AI-first workflows

Claude AI has introduced integration with Canva, enabling users to generate and manage design content using simple text prompts. The new feature allows paid users to create presentations, edit visuals, and explore templates directly within Claude’s chat interface.

Alongside Canva, Claude now supports additional connectors such as Notion and Stripe, as well as desktop apps like Figma and Prisma, expanding its ability to fetch and process data contextually. These integrations are powered by the open-source Model Context Protocol (MCP).
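To make the MCP piece concrete, here is a minimal sketch of how a connector can be exposed as an MCP server using the open-source Python SDK. The server name and the single example tool are illustrative assumptions for this article, not Canva’s actual integration.

```python
# Minimal MCP server sketch using the open-source Python SDK
# (pip install "mcp[cli]"). The server name and tool below are
# illustrative only and do not reflect Canva's real connector.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-design-connector")

@mcp.tool()
def list_templates(keyword: str) -> list[str]:
    """Return placeholder template names matching a keyword.

    A real connector would call the provider's API here; this stub
    only shows how a tool is declared and exposed over MCP.
    """
    catalogue = ["pitch deck", "social post", "event poster"]
    return [name for name in catalogue if keyword.lower() in name]

if __name__ == "__main__":
    # Serve over stdio so an MCP-capable client, such as Claude,
    # can discover and call the tool.
    mcp.run()
```

In practice, the client lists the server’s registered tools and invokes them with JSON arguments; the SDK handles the protocol plumbing, which is what lets an assistant fetch and act on connector data without manual app-switching.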

Canva’s head of ecosystem highlighted that users can now generate, summarise, and publish designs in one continuous workflow within Claude. The move represents another step toward AI-first productivity, removing the need for manual app-switching during the creative process.

Claude is the first AI assistant to enable Canva workflows through MCP, following recent partnerships with tools like Figma. A new integrations directory has also launched, helping users discover compatible apps for both web and desktop experiences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Oracle commits billions to expand AI infrastructure in Europe

Oracle has confirmed a $3 billion investment in its AI and cloud infrastructure across Germany and the Netherlands over the next five years. The move aims to boost its capacity in Europe as demand for advanced computing services continues to rise.

The company plans to invest $2 billion in Germany and $1 billion in the Netherlands, joining other major tech firms ramping up data centre infrastructure. Oracle’s strategy reflects broader market trends, with companies like Meta and Amazon committing large sums to meet AI-driven cloud needs.

The firm expects capital expenditure to exceed $25 billion in fiscal 2026, primarily focused on expanding data centre capabilities for AI. Analysts say Oracle’s AI and cloud services are increasingly competitive with traditional software, fuelling its strong performance this year.

Oracle shares have climbed nearly 38% since January, with a recent regulatory filing revealing a future deal worth over $30 billion in annual revenue beginning in 2028. The company sees its growing infrastructure as key to accelerating revenue and profit.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Trump launches $70 billion AI and energy investment plan

President Donald Trump has announced a $70 billion initiative to strengthen America’s energy and data infrastructure to meet growing AI-driven demand. The plan was revealed at Pittsburgh’s Pennsylvania Energy & Innovation Summit, with more than 60 leading energy and tech CEOs in attendance.

The investment will prioritise US states such as Pennsylvania, Texas, and Georgia, where energy grids are increasingly under pressure due to rising data centre usage. Part of the funding will come from federal-private partnerships, alongside potential reforms led by the Department of Energy.

Analysts suggest the plan would redirect federal support away from wind and solar energy in favour of nuclear and fossil fuel development. The proposal may also scale back green tax credits introduced under the Inflation Reduction Act, potentially affecting more than 300 gigawatts of renewable capacity.

The package includes a project to transform a disused steel mill in Aliquippa into a large-scale data centre hub, forming part of a broader strategy to establish new AI-energy corridors. Critics argue the plan could prioritise legacy systems over decarbonisation, even as AI pushes infrastructure to its limits.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Online health search grows, but scepticism about AI stays high

Trust in traditional healthcare providers remains high, but Americans are increasingly turning to AI for health information, according to new data from the Annenberg Public Policy Centre (APPC).

While 90% of adults trust their personal health provider, nearly 8 in 10 say they are likely to look online for answers to health-related questions. The rise of the internet has also given the public direct access to guidance from government health authorities such as the CDC, FDA, and NIH.

Although trust in these institutions dipped during the Covid-19 pandemic, confidence remains relatively high at 66%–68%. Generative AI tools are now becoming a third key source of health information.

AI-generated summaries — such as Google’s ‘AI Overviews’ or Bing’s ‘Copilot Answers’ — appear prominently in search results.

Despite disclaimers that responses may contain mistakes, nearly two-thirds (63%) of online health searchers find these responses somewhat or very reliable. Around 31% report often or always finding the answers they need in the summaries.

Public attitudes towards AI in clinical settings remain more cautious. Nearly half (49%) of US adults say they are not comfortable with providers using AI tools instead of their own experience. About 36% express some level of comfort, while 41% believe providers are already using AI at least occasionally.

AI use is growing, but most online health seekers continue exploring beyond the initial summary. Two-thirds follow links to websites such as Mayo Clinic, WebMD, or non-profit organisations like the American Heart Association. Federal resources such as the CDC and NIH are also consulted.

Younger users are more likely to recognise and interact with AI summaries. Among those aged 18 to 49, between 69% and 75% have seen AI-generated content in search results, compared to just 49% of users over 65.

Despite high smartphone ownership (93%), only 59% of users track their health with apps. Among these, 52% are likely to share data with a provider, although 36% say they would not. Most respondents (80%) welcome prescription alerts from pharmacies.

The survey, fielded in April 2025 among 1,653 US adults, highlights growing reliance on AI for health information but also reveals concerns about its use in professional medical decision-making. APPC experts urge greater transparency and caution, especially for vulnerable users who may not understand the limitations of AI-generated content.

Director Kathleen Hall Jamieson warns that confusing AI-generated summaries with professional guidance could cause harm. Analyst Laura A. Gibson adds that outdated information may persist in AI platforms, reinforcing the need for user scepticism.

As the public turns to digital health tools, researchers recommend clearer policies, increased transparency, and greater diversity in AI development to ensure safe and inclusive outcomes.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google expands NotebookLM with curated content and mobile access

While Gemini often dominates attention in Google’s AI portfolio, other innovative tools deserve the spotlight. One standout is NotebookLM, a virtual research assistant that helps users organise and interact with complex information across various subjects.

NotebookLM creates structured notebooks from curated materials, allowing meaningful engagement with the content. It supports dynamic features, including summaries and transformation options like Audio Overview, making research tasks more intuitive and efficient.

According to Google, featured notebooks are built using information from respected authors, academic institutions, and trusted nonprofits. Current topics include Shakespeare, Yellowstone National Park and more, offering a wide spectrum of well-sourced material.

Featured notebooks function just like regular ones, with added editorial quality. Users can navigate, explore, and repurpose content in ways that support individual learning and project needs. Google has confirmed the collection will grow over time.

NotebookLM remains in early development, yet the tool already shows potential for transforming everyday research tasks. Google also plans tighter integration with its other productivity tools, including Docs and Slides.

The tool significantly reduces the effort traditionally required for academic or creative research. Structured data presentation, combined with interactive features, makes information easier to consume and act upon.

NotebookLM was initially released on desktop but is now also available as a mobile app. Users can download it via the Google Play Store to create notebooks, add content, and stay productive from anywhere.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

How AI-generated video is reshaping the film industry

AI-generated video has evolved at breakneck speed, moving from distorted and unconvincing clips to hyper-realistic creations that rival traditional filmmaking. What was once a blurry, awkward depiction of Will Smith eating spaghetti in 2023 is now flawlessly rendered on platforms like Google’s Veo 3.

In just months, tools such as Luma Labs’ Dream Machine, OpenAI’s Sora, and Runway AI’s Gen-4 have redefined what’s possible, drawing the attention of Hollywood studios, advertisers, and artists eager to test the limits of this new creative frontier.

Major industry players are already experimenting with AI for previsualisation, visual effects, and even entire animated films. Lionsgate and AMC Networks have partnered with Runway AI, with executives exploring AI-generated family-friendly versions of blockbuster franchises like John Wick and The Hunger Games.

The technology drastically cuts costs for complex scenes, making it possible to create elaborate previews—like a snowstorm filled with thousands of soldiers—for a fraction of the traditional price. However, while some see AI as a tool to expand creative possibilities, resistance remains strong.

Critics argue that AI threatens traditional artistic processes, raises ethical concerns over energy use and data training, and risks undermining human creativity. The debate mirrors past technological shifts in entertainment—inevitable yet disruptive.

As Runway and other pioneers push toward immersive experiences in augmented and virtual reality, the future of filmmaking may no longer be defined solely by Hollywood, but by anyone with access to these powerful tools.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Military AI and the void of accountability

In her blog post ‘Military AI: Operational dangers and the regulatory void,’ Julia Williams warns that AI is reshaping the battlefield, shifting from human-controlled systems to highly autonomous technologies that make life-and-death decisions. From the United States’ Project Maven to Israel’s AI-powered targeting in Gaza and Ukraine’s semi-autonomous drones, military AI is no longer a futuristic concept but a present reality.

While designed to improve precision and reduce risks, these systems carry hidden dangers—opaque ‘black box’ decisions, biases rooted in flawed data, and unpredictable behaviour in high-pressure situations. Operators either distrust AI or over-rely on it, sometimes without understanding how conclusions are reached, creating a new layer of risk in modern warfare.

Bias remains a critical challenge. AI can inherit societal prejudices from the data it is trained on, misinterpret patterns through algorithmic flaws, or encourage automation bias, where humans trust AI outputs even when they shouldn’t.

These flaws can have devastating consequences in military contexts, leading to wrongful targeting or escalation. Despite attempts to ensure ‘meaningful human control’ over autonomous weapons, the concept lacks clarity, allowing states and manufacturers to apply oversight unevenly. Responsibility for mistakes remains murky—should it lie with the operator, the developer, or the machine itself?

That uncertainty feeds into a growing global security crisis. Regulation lags far behind technological progress, with international forums disagreeing on how to govern military AI.

Meanwhile, an AI arms race accelerates between the US and China, driven by private-sector innovation and strategic rivalry. Export controls on semiconductors and key materials only deepen mistrust, while less technologically advanced nations fear both being left behind and becoming targets of AI warfare. The risk extends beyond states, as rogue actors and non-state groups could gain access to advanced systems, making conflicts harder to contain.

As Williams highlights, the growing use of military AI threatens to speed up the tempo of conflict and blur accountability. Without strong governance and global cooperation, it could escalate wars faster than humans can de-escalate them, shifting the battlefield from soldiers to civilian infrastructure and leaving humanity vulnerable to errors we may not survive.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!