Donald Trump revealed during the Pittsburgh, Pennsylvania Energy and Innovation Summit that the US will receive over $100 billion in investments to drive its AI economy and energy infrastructure.
The funding is set to create tens of thousands of jobs across the energy and AI sectors, with Pennsylvania positioned as a central hub.
Trump stated the US is already ‘way ahead of China’ in AI development, adding that staying in the lead will require expanding power production.
Instead of relying solely on renewables, Trump highlighted ‘clean, beautiful coal’, oil, and nuclear energy as key pillars supporting AI-related growth.
Westinghouse plans to build several nuclear plants nationwide, while Knighthead Capital will invest $15 billion in North America’s largest natural gas power plant in Homer City, Pennsylvania.
Additionally, Google will revitalise two hydropower facilities within the state, contributing to the broader investment wave. Trump mentioned that 20 major technology and energy firms are preparing further commitments in Pennsylvania, reinforcing its role in what he calls the US ‘AI economy’.
The event, hosted by Senator Dave McCormick at Carnegie Mellon University, also featured discussions with Pennsylvania Governor Josh Shapiro.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
San Francisco has equipped almost 30,000 city employees, from social workers and healthcare staff to administrators, with Microsoft 365 Copilot Chat. The large-scale rollout followed a six-month pilot in which workers saved up to five hours a week on routine tasks, particularly in 311 service lines.
Copilot Chat helps streamline bureaucratic functions such as drafting documents, translating across more than 40 languages, summarising lengthy reports, and analysing data. The goal is to free staff to focus more on serving residents directly.
A comprehensive five-week training scheme, supported by InnovateUS, ensures that employees learn to use AI securely and responsibly. This includes best practices for data protection, transparent disclosure of AI-generated content, and thorough fact-checking procedures.
City leadership emphasises that all AI tools run on a secure government cloud and adhere to robust guidelines. Employees must reveal when AI is used and remain accountable for its output. The city also plans future AI deployments in traffic management, permitting, and connecting homeless individuals with support services.
Meta has unveiled a £12 million audio research lab in Cambridge’s Ox‑Cam corridor, aimed at enhancing immersive sound for its Ray‑Ban Meta and upcoming Oakley Meta glasses. The facility includes advanced acoustic testing environments, motion‑tracked living spaces, and one of the world’s largest configurable reverberation chambers, enabling engineers to fine‑tune spatial audio through real‑world scenarios.
Designed to filter noise, focus on speech, and respond to head movement, the lab is developing adaptive audio intelligent enough to improve clarity in settings like busy streets or on public transport. Meta plans to integrate these features into its next generation of AR eyewear.
Officials say the lab represents a long‑term investment in UK engineering talent and bolsters the Oxford‑to‑Cambridge tech corridor. Meta’s global affairs lead and the Chancellor emphasised the significance of the investment, supported by a national £22 billion R&D strategy. This marks Meta’s largest overseas engineering base and reinforces its ambition to lead the global AI glasses market.
A national study from Georgetown University, Boston University and the University of Vermont found that women were more concerned than men about AI’s risks in decision-making. Concerns were especially prominent around AI tools used in the workplace, such as hiring platforms and performance review systems.
Bias may be introduced when such tools rely on historical data, which often underrepresents women and other marginalised groups. The study also found that gender influenced compliance with workplace rules surrounding AI use, especially in restrictive environments.
When AI use was banned, women were more likely to follow the rules than men. Usage jumped when tools were explicitly permitted. In cases where AI was allowed, over 80% of both women and men reported using the tools.
Women were generally more wary of AI’s impact across all areas of life — not just in the professional sphere. From personal settings to public life, survey respondents who identified as women consistently viewed AI as more harmful than beneficial.
The study, conducted via Qualtrics in August 2023, surveyed a representative US sample. Participants were 45 years old on average, and just over half identified as women, spanning a range of educational and professional backgrounds.
The research comes amid wider concerns in the AI field about ethics and accountability, often led by women researchers. High-profile cases include Google’s dismissal of Timnit Gebru and later Margaret Mitchell, both of whom raised ethical concerns about large language models.
The study’s authors concluded that building public trust in AI may require clearer policies and greater transparency in how systems are designed. They also highlighted the importance of increasing diversity among those developing AI tools to ensure more inclusive outcomes.
One of the world’s most iconic sporting events — and certainly the pinnacle of professional tennis — came to a close on Sunday, as Jannik Sinner lifted his first Wimbledon trophy and Iga Świątek triumphed in the women’s singles.
The tournament’s leap into the future, however, came at a cost. System failures sparked considerable controversy both during the event and in its aftermath.
Beyond technical faults, the move disrupted one of Wimbledon’s oldest traditions: for the first time in 138 years, line calls were made entirely by AI. Several players have since pointed the finger not just at the machines, but directly at those who put them in charge.
Wimbledon as the turning point for AI in sport
The 2025 edition of Wimbledon introduced a radical shift: all line calls were entrusted exclusively to the Hawk-Eye Live system, eliminating the on-court officials. The sight of a human line judge, once integral to the rhythm and theatre of Grand Slam tennis, was replaced by automated sensors and disembodied voices.
Rather than a triumph of innovation, the tournament became a cautionary tale.
During the second round, Britain’s Sonay Kartal faced Anastasia Pavlyuchenkova in a match that became the focal point of AI criticism. Multiple points were misjudged due to a system error requiring manual intervention mid-match. Kartal was visibly unsettled; Pavlyuchenkova even more so. ‘They stole the game from me,’ she said — a statement aimed not at her opponent but the organisers.
Further problems emerged across the draw, from a serve wrongly ruled out in Taylor Fritz’s quarterfinal to delayed audio cues, making the system’s imperfections increasingly evident.
Athletes speak out when technology silences the human
Discontent was not confined to a few isolated voices. Across locker rooms and at press conferences, players voiced concerns about specific decisions and the underlying principle.
Kartal later said she felt ‘undone by silence’ — referring to the machine’s failure and the absence of any human presence. Emma Raducanu and Jack Draper raised similar concerns, describing the system as ‘opaque’ and ‘alienating’. Without the option to challenge or review a call, athletes felt disempowered.
Former line judge Pauline Eyre described the transformation as ‘mechanical’, warning that AI cannot replicate the subtle understanding of rhythm and emotion inherent to human judgement. ‘Hawk-Eye doesn’t breathe. It doesn’t feel pressure. That used to be part of the game,’ she noted.
Although Wimbledon is built on tradition, the value of human oversight seems to have slipped away.
Other sports, same problem: When AI misses the mark
Wimbledon’s situation is far from unique. In various sports, AI and automated systems have repeatedly demonstrated their limitations.
In the 2020 Premier League, goal-line technology failed during a match between Aston Villa and Sheffield United, overlooking a clear goal — an error that shaped the season’s outcome.
Irish hurling suffered a similar breakdown in 2013, when the Hawk-Eye system wrongly cancelled a valid point during an All-Ireland semi-final, prompting a public apology and a temporary suspension of the technology.
Even tennis has a history of scepticism towards Hawk-Eye. Players like Rafael Nadal and Andy Murray questioned line calls, with replay footage often proving them right.
Patterns begin to emerge. Minor AI malfunctions in high-stakes settings can lead to outsized consequences. Even more damaging is the perception that the technology is beyond reproach.
From umpire to overseer: When AI watches everything
The events at Wimbledon reflect a broader trend, one seen during the Paris 2024 Olympics. As outlined in our earlier analysis of the Olympic AI agenda, AI was used extensively in scoring and judging, crowd monitoring, behavioural analytics, and predictive risk assessment.
Rather than simply officiating, AI has taken on a supervisory role: watching, analysing, interpreting — but offering little to no explanation.
Vital questions arise as the boundary between sports technology and digital governance fades. Who defines suspicious movement? What triggers an alert? Just like with Hawk-Eye rulings, the decisions are numerous, silent, and largely unaccountable.
Traditionally, sport has relied on visible judgement and clear rule enforcement. AI introduces opacity and detachment, making it difficult to understand how and why decisions are made.
The AI paradox: Trust without understanding
The more sophisticated AI becomes, the less people seem to understand it. The so-called black box effect — where outputs are accepted without clarity on inputs — now exists across society, from medicine to finance. Sport is no exception.
At Wimbledon, players were not simply objecting to incorrect calls. They were reacting to a system that offered no explanation, human feedback, or room for dialogue. In previous tournaments, athletes could appeal or contest a decision. In 2025, they were left facing a blinking light and a pre-recorded announcement.
Such experiences highlight a growing paradox. As trust in AI increases, scrutiny declines, often precisely because people cannot question it.
That trust comes at a price. In sport, it can mean irreversible moments. In public life, it risks producing systems that are beyond challenge. Even the most accurate machine, if left unchecked, may render the human experience obsolete.
Dependency over judgement and the cost of trusting machines
The promise of AI lies in precision. But precision, when removed from context and human judgement, becomes fragile.
What Wimbledon exposed was not a failure in design, but a lapse in restraint — a human tendency to over-delegate. Players faced decisions without recourse, coaches adapted to algorithmic expectations, and fans were left outside the decision-making loop.
Whether AI can be accurate is no longer a question. It often is. The danger arises when accuracy is mistaken for objectivity — when the tool becomes the ultimate authority.
Sport has always embraced uncertainty: the unexpected volley, the marginal call, the human error. Strip that away, and something vital is lost.
A hybrid model — where AI supports but does not dictate — may help preserve fairness and trust.
Let AI enhance the game. Let humans keep it human.
Claude AI has introduced integration with Canva, enabling users to generate and manage design content using simple text prompts. The new feature allows paid users to create presentations, edit visuals, and explore templates directly within Claude’s chat interface.
Alongside Canva, Claude now supports additional connectors like Notion, Stripe, and desktop apps like Figma and Prisma, expanding its ability to fetch and process data contextually. These integrations are powered by the open-source Model Context Protocol (MCP).
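The connector pattern described above rests on MCP’s use of JSON-RPC 2.0 messages between an AI client and a tool server. As a rough illustrative sketch only (the tool name `create_design` and its arguments are hypothetical, not Canva’s actual API), a client-side tool invocation has roughly this shape:

```python
import json

# Illustrative sketch of an MCP-style "tools/call" request.
# MCP messages follow JSON-RPC 2.0; the client (e.g. an assistant
# like Claude) asks a connector's server to run a named tool.
# Tool name and arguments below are hypothetical examples.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "create_design",  # hypothetical tool name
        "arguments": {"prompt": "Five-slide deck on Q3 results"},
    },
}

# Serialise the request as it would travel over the wire.
payload = json.dumps(request)
print(payload)
```

The appeal of the protocol is that every connector (Canva, Notion, Stripe, and so on) exposes its capabilities through this same message shape, so the assistant needs no per-app integration code.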
Canva’s head of ecosystem highlighted that users can now generate, summarise, and publish designs in one continuous workflow within Claude. The move represents another step toward AI-first productivity, removing the need for manual app-switching during the creative process.
Claude is the first AI assistant to enable Canva workflows through MCP, following recent partnerships with tools like Figma. A new integrations directory has also launched, helping users discover compatible apps for both web and desktop experiences.
Oracle has confirmed a $3 billion investment in its AI and cloud infrastructure across Germany and the Netherlands over the next five years. The move aims to boost its capacity in Europe as demand for advanced computing services continues to rise.
The company plans to invest $2 billion in Germany and $1 billion in the Netherlands, joining other major tech firms ramping up data centre infrastructure. Oracle’s strategy reflects broader market trends, with companies like Meta and Amazon committing large sums to meet AI-driven cloud needs.
The firm expects capital expenditure to exceed $25 billion in fiscal 2026, primarily focused on expanding data centre capabilities for AI. Analysts say Oracle’s AI and cloud services are increasingly competitive with traditional software, fuelling its strong performance this year.
Oracle shares have climbed nearly 38% since January, with a recent regulatory filing revealing a future deal worth over $30 billion in annual revenue beginning in 2028. The company sees its growing infrastructure as key to accelerating revenue and profit.
President Donald Trump has announced a $70 billion initiative to strengthen America’s energy and data infrastructure to meet growing AI-driven demand. The plan was revealed at Pittsburgh’s Pennsylvania Energy & Innovation Summit, with more than 60 leading energy and tech CEOs in attendance.
The investment will prioritise US states such as Pennsylvania, Texas, and Georgia, where energy grids are increasingly under pressure due to rising data centre usage. Part of the funding will come from federal-private partnerships, alongside potential reforms led by the Department of Energy.
Analysts suggest the plan would redirect federal support away from wind and solar energy in favour of nuclear and fossil fuel development. The proposal may also scale back green tax credits introduced under the Inflation Reduction Act, potentially affecting more than 300 gigawatts of renewable capacity.
The package includes a project to transform a disused steel mill in Aliquippa into a large-scale data centre hub, forming part of a broader strategy to establish new AI-energy corridors. Critics argue the plan could prioritise legacy systems over decarbonisation, even as AI pushes infrastructure to its limits.
While 90% of adults trust their personal health provider, nearly 8 in 10 say they are likely to look online for answers to health-related questions. The rise of the internet gave the public access to government health authorities such as the CDC, FDA, and NIH.
Although trust in these institutions dipped during the Covid-19 pandemic, confidence remains relatively high at 66%–68%. Generative AI tools are now becoming a third key source of health information.
AI-generated summaries — such as Google’s ‘AI Overviews’ or Bing’s ‘Copilot Answers’ — appear prominently in search results.
Despite disclaimers that responses may contain mistakes, nearly two-thirds (63%) of online health searchers find these responses somewhat or very reliable. Around 31% report often or always finding the answers they need in the summaries.
Public attitudes towards AI in clinical settings remain more cautious. Nearly half (49%) of US adults say they are not comfortable with providers using AI tools instead of their own experience. About 36% express some level of comfort, while 41% believe providers are already using AI at least occasionally.
AI use is growing, but most online health seekers continue exploring beyond the initial summary. Two-thirds follow links to websites such as Mayo Clinic, WebMD, or non-profit organisations like the American Heart Association. Federal resources such as the CDC and NIH are also consulted.
Younger users are more likely to recognise and interact with AI summaries. Among those aged 18 to 49, between 69% and 75% have seen AI-generated content in search results, compared to just 49% of users over 65.
Despite high smartphone ownership (93%), only 59% of users track their health with apps. Among these, 52% are likely to share data with a provider, although 36% say they would not. Most respondents (80%) welcome prescription alerts from pharmacies.
The survey, fielded in April 2025 among 1,653 US adults, highlights growing reliance on AI for health information but also reveals concerns about its use in professional medical decision-making. Experts at the Annenberg Public Policy Center (APPC) urge greater transparency and caution, especially for vulnerable users who may not understand the limitations of AI-generated content.
Director Kathleen Hall Jamieson warns that confusing AI-generated summaries with professional guidance could cause harm. Analyst Laura A. Gibson adds that outdated information may persist in AI platforms, reinforcing the need for user scepticism.
As the public turns to digital health tools, researchers recommend clearer policies, increased transparency, and greater diversity in AI development to ensure safe and inclusive outcomes.
While Gemini often dominates attention in Google’s AI portfolio, other innovative tools deserve the spotlight. One standout is NotebookLM, a virtual research assistant that helps users organise and interact with complex information across various subjects.
NotebookLM creates structured notebooks from curated materials, allowing meaningful engagement with the content. It supports dynamic features, including summaries and transformation options like Audio Overview, making research tasks more intuitive and efficient.
According to Google, featured notebooks are built using information from respected authors, academic institutions, and trusted nonprofits. Current topics include Shakespeare, Yellowstone National Park and more, offering a wide spectrum of well-sourced material.
Featured notebooks function just like regular ones, with added editorial quality. Users can navigate, explore, and repurpose content in ways that support individual learning and project needs. Google has confirmed the collection will grow over time.
NotebookLM remains in early development, yet the tool already shows potential for transforming everyday research tasks. Google also plans tighter integration with its other productivity tools, including Docs and Slides.
The tool significantly reduces the effort traditionally required for academic or creative research. Structured data presentation, combined with interactive features, makes information easier to consume and act upon.
NotebookLM was initially released on desktop but is now also available as a mobile app. Users can download it via the Google Play Store to create notebooks, add content, and stay productive from anywhere.