Meta unveils 5GW AI data centre plans

Meta has unveiled plans to build a 5GW data centre in Louisiana, part of a significant expansion of its AI infrastructure. CEO Mark Zuckerberg said the Hyperion complex will cover an area nearly the size of Manhattan, with the first 1.5GW phase expected online in 2026.

The company is also constructing a 1GW cluster named Prometheus in Ohio, US, which combines Meta-owned infrastructure with leased systems. Both projects will draw on a mix of renewable and natural gas power, underlining Meta’s strategy to ramp up compute capacity rapidly.

Zuckerberg stated Meta would invest hundreds of billions of dollars into superintelligence development, supported by elite talent recruited from major rivals. He added that the new data centres would offer the highest compute-per-researcher in the industry.

Amidst growing demand, Meta recently sought $29 billion in financing and secured 1GW of renewable power. Yet the expansion has raised environmental concerns, with one data centre in Georgia reportedly consuming 10% of a county’s water supply.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI economist shares four key skills for kids in AI era

As AI reshapes jobs and daily life, OpenAI’s chief economist, Ronnie Chatterji, teaches his children four core skills to help them adapt and thrive.

Rather than having children rely solely on technology, he believes critical thinking, adaptability, emotional intelligence, and financial numeracy will remain essential.

Chatterji highlighted these skills during an episode of the OpenAI podcast, saying critical thinking helps children spot problems rather than follow instructions. Given constant changes in AI, climate, and geopolitics, he stressed adaptability as another priority.

Rather than expecting children to master coding alone, Chatterji argues that emotional intelligence will make humans valuable partners alongside AI.

The fourth skill he emphasises is financial numeracy, alongside related basics such as doing maths without a calculator and maintaining writing skills even when dictation software is available. Instead of predicting specific future job titles, Chatterji believes focusing on these abilities equips children for any outcome.

His approach reflects a broader trend among tech leaders, with others like Alexis Ohanian and Sam Altman also promoting AI literacy while valuing traditional skills such as reading, writing, and arithmetic.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Women see AI as more harmful across life settings

Women are showing more scepticism than men when it comes to AI, particularly regarding its ethics, fairness and transparency.

A national study from Georgetown University, Boston University and the University of Vermont found that women were more concerned about AI’s risks in decision-making. Concerns were especially prominent around AI tools used in the workplace, such as hiring platforms and performance review systems.

Bias may be introduced when such tools rely on historical data, which often underrepresents women and other marginalised groups. The study also found that gender influenced compliance with workplace rules surrounding AI use, especially in restrictive environments.
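To make that mechanism concrete, here is a minimal, hypothetical Python sketch (not drawn from the study or any real hiring platform) showing how a screening model fitted on synthetic, historically skewed hiring data reproduces the skew when scoring equally skilled candidates. All names, numbers and the model choice are illustrative assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
gender = rng.integers(0, 2, n)                 # 0 = man, 1 = woman (synthetic attribute)
skill = rng.normal(0.0, 1.0, n)                # skill is distributed identically for both groups

# Historical hiring decisions favoured men at equal skill (the bias we encode on purpose).
hired = (skill + 0.8 * (gender == 0) + rng.normal(0.0, 0.5, n)) > 0.5

X = np.column_stack([skill, gender])
model = LogisticRegression().fit(X, hired)

# Score two candidates with identical skill but different gender.
probe = np.array([[0.0, 0], [0.0, 1]])
print(model.predict_proba(probe)[:, 1])        # the gap in predicted hiring probability mirrors the historical skew
# Real systems rarely use gender as an explicit feature, but proxies in historical data can have the same effect.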

When AI use was banned, women were more likely than men to follow the rules. Where AI was explicitly permitted, usage jumped, with over 80% of both women and men reporting that they used the tools.

Women were generally more wary of AI’s impact across all areas of life — not just in the professional sphere. From personal settings to public life, survey respondents who identified as women consistently viewed AI as more harmful than beneficial.

The study, conducted via Qualtrics in August 2023, surveyed a representative sample of US adults from a range of educational and professional backgrounds. Participants were 45 years old on average, and just over half identified as women.

The research comes amid wider concerns in the AI field about ethics and accountability, often led by women researchers. High-profile cases include Google’s dismissal of Timnit Gebru and later Margaret Mitchell, both of whom raised ethical concerns about large language models.

The study’s authors concluded that building public trust in AI may require clearer policies and greater transparency in how systems are designed. They also highlighted the importance of increasing diversity among those developing AI tools to ensure more inclusive outcomes.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Claude integrates Canva to power AI-first workflows

Claude AI has introduced integration with Canva, enabling users to generate and manage design content using simple text prompts. The new feature allows paid users to create presentations, edit visuals, and explore templates directly within Claude’s chat interface.

Alongside Canva, Claude now supports additional connectors such as Notion and Stripe, as well as desktop apps like Figma and Prisma, expanding its ability to fetch and process data in context. These integrations are powered by the open-source Model Context Protocol (MCP).
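For readers curious what such a connector involves, below is a minimal MCP server sketch using the official Python SDK (the mcp package, assuming its FastMCP helper). The connector name, tools and data are hypothetical placeholders, not Canva’s or Anthropic’s actual implementation.

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("design-connector")   # hypothetical connector name

@mcp.tool()
def list_templates(keyword: str) -> list[str]:
    """Return template names matching a keyword (stubbed, illustrative data)."""
    catalogue = ["pitch deck", "social post", "event poster"]
    return [name for name in catalogue if keyword.lower() in name]

@mcp.tool()
def summarise_design(design_id: str) -> str:
    """Return a short text summary of a design (stubbed, illustrative data)."""
    return f"Design {design_id}: 12 slides, brand palette applied, last edited today."

if __name__ == "__main__":
    mcp.run()   # serves the tools over stdio so an MCP client can call them

Once a server like this is registered with an MCP client, the assistant can discover the tools and invoke them from natural-language prompts, which is how connectors of this kind remove manual app-switching.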

Canva’s head of ecosystem highlighted that users can now generate, summarise, and publish designs in one continuous workflow within Claude. The move represents another step toward AI-first productivity, removing the need for manual app-switching during the creative process.

Claude is the first AI assistant to enable Canva workflows through MCP, following recent partnerships with tools like Figma. A new integrations directory has also launched, helping users discover compatible apps for both web and desktop experiences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Oracle commits billions to expand AI infrastructure in Europe

Oracle has confirmed a $3 billion investment in its AI and cloud infrastructure across Germany and the Netherlands over the next five years. The move aims to boost its capacity in Europe as demand for advanced computing services continues to rise.

The company plans to invest $2 billion in Germany and $1 billion in the Netherlands, joining other major tech firms ramping up data centre infrastructure. Oracle’s strategy reflects broader market trends, with companies like Meta and Amazon committing large sums to meet AI-driven cloud needs.

The firm expects capital expenditure to exceed $25 billion in fiscal 2026, primarily focused on expanding data centre capabilities for AI. Analysts say Oracle’s AI and cloud services are increasingly competitive with traditional software, fuelling its strong performance this year.

Oracle shares have climbed nearly 38% since January, with a recent regulatory filing revealing a future deal worth over $30 billion in annual revenue beginning in 2028. The company sees its growing infrastructure as key to accelerating revenue and profit.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Online health search grows, but scepticism about AI stays high

Trust in traditional healthcare providers remains high, but Americans are increasingly turning to AI for health information, according to new data from the Annenberg Public Policy Centre (APPC).

While 90% of adults trust their personal health provider, nearly 8 in 10 say they are likely to look online for answers to health-related questions. The rise of the internet also gave the public direct access to guidance from government health authorities such as the CDC, FDA, and NIH.

Although trust in these institutions dipped during the Covid-19 pandemic, confidence remains relatively high at 66%–68%. Generative AI tools are now becoming a third key source of health information.

AI-generated summaries, such as Google’s ‘AI Overviews’ or Bing’s ‘Copilot Answers’, appear prominently in search results.

Despite disclaimers that responses may contain mistakes, nearly two-thirds (63%) of online health searchers find these responses somewhat or very reliable. Around 31% report often or always finding the answers they need in the summaries.

Public attitudes towards AI in clinical settings remain more cautious. Nearly half (49%) of US adults say they are not comfortable with providers using AI tools instead of their own experience. About 36% express some level of comfort, while 41% believe providers are already using AI at least occasionally.

AI use is growing, but most online health seekers continue exploring beyond the initial summary. Two-thirds follow links to websites such as Mayo Clinic, WebMD, or non-profit organisations like the American Heart Association. Federal resources such as the CDC and NIH are also consulted.

Younger users are more likely to recognise and interact with AI summaries. Among those aged 18 to 49, between 69% and 75% have seen AI-generated content in search results, compared to just 49% of users over 65.

Despite high smartphone ownership (93%), only 59% of users track their health with apps. Among these, 52% are likely to share data with a provider, although 36% say they would not. Most respondents (80%) welcome prescription alerts from pharmacies.

The survey, fielded in April 2025 among 1,653 US adults, highlights growing reliance on AI for health information but also reveals concerns about its use in professional medical decision-making. APPC experts urge greater transparency and caution, especially for vulnerable users who may not understand the limitations of AI-generated content.

Director Kathleen Hall Jamieson warns that confusing AI-generated summaries with professional guidance could cause harm. Analyst Laura A. Gibson adds that outdated information may persist in AI platforms, reinforcing the need for user scepticism.

As the public turns to digital health tools, researchers recommend clearer policies, increased transparency, and greater diversity in AI development to ensure safe and inclusive outcomes.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

How AI-generated video is reshaping the film industry

AI-generated video has evolved at breakneck speed, moving from distorted and unconvincing clips to hyper-realistic creations that rival traditional filmmaking. What was once a blurry, awkward depiction of Will Smith eating spaghetti in 2023 is now flawlessly rendered on platforms like Google’s Veo 3.

In just months, tools such as Luma Labs’ Dream Machine, OpenAI’s Sora, and Runway AI’s Gen-4 have redefined what’s possible, drawing the attention of Hollywood studios, advertisers, and artists eager to test the limits of this new creative frontier.

Major industry players are already experimenting with AI for previsualisation, visual effects, and even entire animated films. Lionsgate and AMC Networks have partnered with Runway AI, with executives exploring AI-generated family-friendly versions of blockbuster franchises like John Wick and The Hunger Games.

The technology drastically cuts costs for complex scenes, making it possible to create elaborate previews—like a snowstorm filled with thousands of soldiers—for a fraction of the traditional price. However, while some see AI as a tool to expand creative possibilities, resistance remains strong.

Critics argue that AI threatens traditional artistic processes, raises ethical concerns over energy use and data training, and risks undermining human creativity. The debate mirrors past technological shifts in entertainment—inevitable yet disruptive.

As Runway and other pioneers push toward immersive experiences in augmented and virtual reality, the future of filmmaking may no longer be defined solely by Hollywood, but by anyone with access to these powerful tools.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Europe builds quantum computers with glass and light

European researchers are building quantum computers using glass chips and photons instead of traditional silicon and electricity.

Led by the Polytechnic University of Milan in Italy, the project is harnessing the power of light to deliver faster computing and solve real-world challenges.

These chips avoid energy loss by guiding photons through transparent glass, an approach designed to boost precision and reliability in quantum operations.

The collaborative effort includes specialists in photon detection, electronics, and quantum software, all working towards a functioning photonic quantum machine by 2026.

One of its first goals is to help design better lithium-ion batteries, which are vital for Europe’s shift to renewable energy and electric transport.

Europe’s broader ambition is to deploy a quantum-accelerated supercomputer by 2025 and grow a local quantum chip industry by 2030. While talent and innovation are strong, the project highlights a pressing need for greater private investment and commercial scale to match global rivals.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

YouTube tightens rules on AI-only videos

YouTube is curbing AI-generated content that lacks human input in order to protect content quality and ad revenue. Since July 15, creators must disclose their use of AI and add genuine creative value to qualify for monetisation.

The platform’s clampdown aims to prevent a flood of low-quality videos, known as ‘AI slop’, that risk overwhelming its algorithm and lowering ad returns. Analysts say Google’s new stance reflects the need to balance AI leadership with platform integrity.

YouTube will still allow AI-assisted content, but it insists creators must offer original contributions such as commentary, editing, or storytelling. Without this, AI-only videos will no longer earn advertising revenue.

The move also addresses rising concerns around copyright, ownership and algorithm overload, which could destabilise the platform’s delicate content ecosystem. Experts warn that unregulated AI use may harm creators who produce high-effort, original material.

Stakeholders say the changes will benefit creators focused on meaningful content while preserving advertiser trust and fair revenue sharing across millions of global partners. YouTube’s approach signals a shift towards responsible AI integration in media platforms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU bets on quantum to regain global influence

European policymakers are turning to quantum technology as a strategic solution to the continent’s growing economic and security challenges.

With the US and China surging ahead in AI, Europe sees quantum innovation as a last-mover advantage it cannot afford to miss.

Quantum computers, sensors, and encryption are already transforming military, industrial and cybersecurity capabilities.

From stealth detection to next-generation batteries, Europe hopes quantum breakthroughs will bolster its defences and revitalise its energy, automotive and pharmaceutical sectors.

Although EU institutions have heavily invested in quantum programmes and Europe trains more engineers than anywhere else, funding gaps persist.

Private investment remains limited, pushing some of the continent’s most promising start-ups abroad in search of capital and scale.

The EU must pair its technical excellence with bold policy reforms to avoid falling behind. Strategic protections, high-risk R&D support and new alliances will be essential to turning scientific strength into global leadership.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!