Google launches Veo 3 video for Gemini users globally

Google has begun rolling out its Veo 3 video-generation model to Gemini users across more than 159 countries. The advanced AI tool allows subscribers to create short video clips simply by entering text prompts.

Access to Veo 3 is limited to those on Google’s AI Pro plan, and usage is currently restricted to three videos per day. The tool can generate clips lasting up to eight seconds, enabling rapid video creation for a variety of purposes.

Google is already developing additional features for Gemini, including the ability to turn images into videos, according to product director Josh Woodward.

EU PREVAIL project opens Edge AI platform to users in June

The European Union’s PREVAIL project is preparing to open its Edge AI services to external users in June 2025.

Coordinated by Europe’s top research and technology organisations—CEA-Leti, Fraunhofer-Gesellschaft, imec, and VTT—the initiative offers a shared, multi-hub infrastructure designed to speed up the development and commercialisation of next-generation Edge AI technologies.

Through its platform, European designers will gain access to advanced chip prototyping capabilities and full design support using standard commercial tools.

PREVAIL combines commercial foundry processes with advanced technology modules developed in partner clean rooms. These include embedded non-volatile memories (eNVM), silicon photonics, and 3D integration technologies such as silicon interposers and packaging innovations.

Initial demonstrators, already in development with industry partners, will serve as test cases to ensure compatibility with a broad range of applications and future scalability.

From July 2025, a €20 million EU-funded call under the ‘Low Power Edge AI’ initiative will help selected customers co-finance their access to the platform. Whether supported by EU funds or independently financed, users will be able to design chips using one of four shared platforms.

The consortium has also set up a user interface team to manage technical support and provide access to Process Design Kits and Design Rule Manuals.

Cyberattacks drain millions from hospitality sector

The booming hospitality sector handles sensitive guest information daily, from passports to payment details, making it a prime target for cybercriminals. Recent figures reveal the average cost of a data breach in hospitality rose to $3.86 million in 2024, with over 14,000 critical vulnerabilities detected in hotel networks worldwide.

Complex systems connecting guests, staff, vendors, and devices like smart locks multiply entry points for attackers. High staff turnover and frequent reliance on temporary workers add to the sector’s cybersecurity challenges.

New employees are often more susceptible to phishing and social engineering attacks, as demonstrated by costly breaches such as the 2023 MGM Resorts incident. Artificial intelligence can help bolster defences, but it is no cure-all and must be paired with staff training and clear policies.

Recent attacks on major hotel brands have exposed millions of customer records, intensifying pressure on hospitality firms to meet privacy regulations like GDPR. Maintaining robust cybersecurity requires continuous updates to policies, vendor checks, and committed leadership support.

Hotels lagging in these areas risk severe financial and reputational damage in an increasingly hostile cyber landscape.

BT launches cyber training as small businesses struggle with threats

Cyber attacks aren’t just a problem for big-name brands. Small and medium businesses are increasingly in the crosshairs, according to new research from BT and Be the Business.

Two in five SMEs have never provided cyber security training to their staff, despite a sharp increase in attacks. In the past year alone, 42% of small firms and 67% of medium-sized companies reported breaches.

Phishing remains the most common threat, affecting 85% of businesses. But more advanced tactics are spreading fast, including ransomware and ‘quishing’ scams — where fake QR codes are used to steal data.

Recovering from a breach is costly. Micro and small businesses spend nearly £8,000 on average to recover from their most serious incident. The figure excludes reputational damage and long-term disruption.

To help tackle the issue, BT has launched a new training programme with Be the Business. The course offers practical, low-cost cyber advice designed for companies without dedicated IT support.

The programme focuses on real-world threats, including AI-driven scams, and offers guidance on steps like password hygiene, two-factor authentication, and safe software practices.
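
To illustrate one of those steps, the sketch below shows how app-based two-factor codes are typically generated under the TOTP standard (RFC 6238). It is a minimal educational example with a made-up secret, not part of BT's programme; real deployments should rely on a maintained library.

```python
# Minimal sketch of TOTP (RFC 6238), the scheme behind most authenticator
# apps: a shared secret plus the current 30-second time window yields a
# short one-time code. Educational only; use a maintained library in practice.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(time.time()) // step)  # time-window index
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# Hypothetical base32 secret, for demonstration purposes only.
print(totp("JBSWY3DPEHPK3PXP"))
```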

Although 69% of SME leaders are now exploring AI tools to help defend their systems, 18% also list AI as one of their top cyber threats — a sign of both potential and risk.

Experts warn that basic precautions still matter most. With free and affordable training options now widely available, small firms have more tools than ever to improve their cyber defences.

AI model predicts sudden cardiac death more accurately

A new AI tool developed by researchers at Johns Hopkins University has shown promise in predicting sudden cardiac death among people with hypertrophic cardiomyopathy (HCM), outperforming existing clinical tools.

The model, known as MAARS (Multimodal AI for ventricular Arrhythmia Risk Stratification), uses a combination of medical records, cardiac MRI scans, and imaging reports to assess individual patient risk more accurately.

In early trials, MAARS achieved an AUC (area under the curve) score of 0.89 internally and 0.81 in external validation — both significantly higher than traditional risk calculators recommended by American and European guidelines.
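
For context on the metric, the toy sketch below shows how an AUC score is computed from a model's predicted risks. The labels and scores are invented for illustration, not data from the study, and scikit-learn's roc_auc_score is assumed to be available.

```python
# Toy illustration of how an AUC score is computed: it measures how often
# the model ranks a true positive case above a true negative one.
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 0, 1, 0, 0, 1, 0, 1]   # 1 = event occurred, 0 = did not
y_score = [0.10, 0.60, 0.80, 0.20, 0.65,  # model-predicted risk per patient
           0.15, 0.40, 0.90, 0.25, 0.55]

print(f"AUC: {roc_auc_score(y_true, y_score):.2f}")  # 0.96 (0.5 = chance, 1.0 = perfect)
```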

The improvement is attributed to its ability to interpret raw cardiac MRI data, particularly scans enhanced with gadolinium, which are often overlooked in standard assessments.

While the tool has the potential to personalise care and reduce unnecessary defibrillator implants, researchers caution that the study was limited to small cohorts from Johns Hopkins and North Carolina’s Sanger Heart & Vascular Institute.

They also acknowledged that MAARS’s reliance on large and complex datasets may pose challenges for widespread clinical use.

Nevertheless, the research team believes MAARS could mark a shift in managing HCM, the most common inherited heart condition.

By identifying hidden patterns in imaging and medical histories, the AI model may protect patients more effectively, especially younger individuals who remain at risk yet receive no benefit from current interventions.

TikTok struggles to stop the spread of hateful AI videos

Google’s Veo 3 video generator has enabled a new wave of racist AI content to spread across TikTok, despite both platforms having strict policies banning hate speech.

According to MediaMatters, several TikTok accounts have shared AI-generated videos promoting antisemitic and anti-Black stereotypes, many of which circulated widely before being removed.

These short, highly realistic videos often included offensive depictions, and the visible ‘Veo’ watermark confirmed their origin from Google’s model.

While both TikTok and Google officially prohibit the creation and distribution of hateful material, enforcement has been patchy. TikTok claims to use both automated systems and human moderators, yet the overwhelming volume of uploads appears to have delayed action.

Although TikTok says it banned over half the accounts before MediaMatters’ findings were published, harmful videos still managed to reach large audiences.

Google also maintains a Prohibited Use Policy banning hate-driven content. However, Veo 3's advanced realism, combined with the difficulty of detecting coded prompts, makes it easier for users to bypass safeguards.

Testing by reporters suggests the model is more permissive than previous iterations, raising concerns about its ability to filter out offensive material before it is created.

With Google planning to integrate Veo 3 into YouTube Shorts, concerns are rising that harmful content may soon flood other platforms. TikTok and Google appear to lack the enforcement capacity to keep pace with the abuse of generative AI.

Despite strict rules on paper, both companies are struggling to prevent their technology from fuelling racist narratives at scale.

Meta pursues two AI paths with internal tension

Meta’s AI strategy is facing internal friction, with CEO Mark Zuckerberg and Chief AI Scientist Yann LeCun taking sharply different paths toward the company’s future.

While Zuckerberg is doubling down on superintelligence, even launching a new division called Meta Superintelligence Labs, LeCun argues that even ‘cat-level’ intelligence remains a distant goal.

The new lab, led by Scale AI founder Alexandr Wang, reflects Zuckerberg's ambition to accelerate progress in large language models, a move triggered by disappointment in Meta's recent Llama performance.

Reports suggest the models were tested with customised benchmarks to appear more capable than they were. That prompted frustration at the top, especially after Chinese firm DeepSeek built more advanced tools using Meta’s open-source Llama.

LeCun’s long-standing advocacy for open-source AI now appears at odds with the company’s shifting priorities. While he promotes openness for diversity and democratic access, Zuckerberg’s recent memo did not mention open-source principles.

Internally, executives have even discussed backing away from Llama and turning to closed models like those from OpenAI or Anthropic instead.

Meta is pursuing both visions — supporting LeCun’s research arm, FAIR, and investing in a new, more centralised superintelligence effort. The company has offered massive compensation packages to OpenAI researchers, with some reportedly offered up to $100 million.

Whether Meta continues balancing both philosophies or chooses one outright could determine the direction of its AI legacy.

OpenAI and Oracle join forces for massive AI data centre expansion

OpenAI has signed a significant cloud computing deal with Oracle worth $30 billion per year, aiming to secure around 4.5GW of capacity through the Stargate joint venture, in which Oracle is a key investor.

Oracle plans to develop several large-scale data centres across the United States, including a potential expansion of its Abilene, Texas, site from 1.2GW to 2GW.

According to reports from Bloomberg and the Financial Times, other locations under consideration include Michigan, Wisconsin, Wyoming, New Mexico, Georgia, Ohio, and Pennsylvania.

In addition to its collaboration with Oracle, OpenAI continues using Microsoft Azure as its primary cloud provider and works with CoreWeave and Google. Notably, OpenAI leverages Google’s custom TPUs in some operations.

Despite these partnerships, OpenAI is pursuing plans to build its own data centre infrastructure. The company also intends to construct a Stargate campus in the United Arab Emirates, in collaboration with Oracle, Nvidia, Cisco, SoftBank, and G42, and is scouting global locations for future facilities.

The massive investment underscores OpenAI’s growing compute needs and the global scale of its AI ambitions.

EU races to catch up in quantum tech amid cybersecurity fears

The European Union is ramping up efforts to lead in quantum computing, but cybersecurity experts warn that the technology could upend digital security as we know it.

In a new strategy published Wednesday, the European Commission admitted that Europe trails the United States and China in commercialising quantum technology, despite its strong academic presence. The bloc is now calling for more private investment to close the gap.

Quantum computing offers revolutionary potential, from drug discovery to defence applications. But its power poses a serious risk: it could break today’s internet encryption.

Current digital security relies on public-key cryptography, built on mathematical problems that conventional computers cannot solve in any practical timeframe. Quantum machines, however, could one day break these codes with ease, making sensitive data readable to malicious actors.
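
To make the risk concrete, here is a toy sketch of why factoring breaks RSA-style public-key encryption. The primes are deliberately tiny and insecure; real keys are far too large for classical factoring, which is precisely the advantage a quantum computer running Shor's algorithm would erase.

```python
# Toy sketch: RSA-style security rests on the hardness of factoring n = p * q.
# These primes are deliberately tiny; Shor's algorithm on a quantum computer
# could factor real key sizes just as easily as the loop below factors n.
p, q = 61, 53                        # secret primes
n = p * q                            # public modulus (3233)
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent, requires knowing p, q

msg = 42
cipher = pow(msg, e, n)              # anyone can encrypt with the public key
assert pow(cipher, d, n) == msg      # only the private key decrypts

# An attacker who factors n recovers the private key outright:
p_found = next(i for i in range(2, n) if n % i == 0)  # trivial at this size
q_found = n // p_found
d_found = pow(e, -1, (p_found - 1) * (q_found - 1))
assert pow(cipher, d_found, n) == msg                 # encryption broken
```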

Experts fear a ‘store now, decrypt later’ scenario, where adversaries collect encrypted data now and crack it once quantum capabilities mature. That could expose government secrets and critical infrastructure.

The EU is also concerned about losing control over homegrown tech companies to foreign investors. While Europe leads in quantum research output, it only receives 5% of global private funding. In contrast, the US and China attract over 90% combined.

European cybersecurity agencies published a roadmap for transitioning to post-quantum cryptography to address the threat. The aim is to secure critical infrastructure by 2030 — a deadline shared by the US, UK, and Australia.

IBM recently said it could release a workable quantum computer by 2029, highlighting the urgency of the challenge. Experts stress that replacing encryption is only part of the task. The broader transition will affect billions of systems, requiring enormous technical and logistical effort.

Governments are already reacting. Some EU states have imposed export restrictions on quantum tech, fearing their communications could be exposed. Despite the risks, European officials say the worst-case scenarios are not inevitable, but doing nothing is not an option.

DeepSeek gains business traction despite security risks

Chinese AI company DeepSeek is gaining traction in global markets despite growing concerns about national security.

While government bans remain in place across several countries, businesses are turning to DeepSeek's models for their low cost and strong performance; the models often rank just behind OpenAI's ChatGPT and Google's Gemini in traffic and market share.

DeepSeek’s appeal lies in its efficiency. Through engineering techniques such as its ‘mixture-of-experts’ architecture, the company has cut computing costs by activating only a fraction of the model’s parameters for each query, with no noticeable drop in performance.
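
As a rough illustration of the general mixture-of-experts idea (not DeepSeek's actual code or architecture), the sketch below routes each input to only the top two of four expert networks, so most parameters stay idle on any single forward pass.

```python
# Generic sketch of top-k mixture-of-experts routing: a router scores all
# experts, but only the top-k are evaluated, so compute scales with k
# rather than with the total number of experts.
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 8, 4, 2

W_router = rng.normal(size=(d_model, n_experts))            # routing weights
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]

def moe_forward(x: np.ndarray) -> np.ndarray:
    logits = x @ W_router                                   # score every expert
    chosen = np.argsort(logits)[-top_k:]                    # keep only the top-k
    weights = np.exp(logits[chosen] - logits[chosen].max())
    weights /= weights.sum()                                # softmax over chosen
    # Only top_k of n_experts weight matrices are touched per input.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, chosen))

print(moe_forward(rng.normal(size=d_model)).shape)          # (8,)
```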

Training costs have reportedly been as low as $5.6 million — a fraction of what rivals like Anthropic spend. As a result, DeepSeek’s models are now available across major platforms, including AWS, Azure, Google Cloud, and even open-source repositories like GitHub and Hugging Face.

However, the way DeepSeek is accessed matters. While companies can safely self-host the models in private environments, using the mobile app or website means sending data to Chinese servers, a key reason for widespread bans on public-sector use.

Individual consumers often lack the technical control enterprises enjoy, making their data more vulnerable to foreign access.

Despite the political tension, demand continues to grow. US firms are exploring DeepSeek as a cost-saving alternative, and its models are being deployed in industries from telecoms to finance.

Even Perplexity, an American AI firm, has used DeepSeek R1 to power a research tool hosted entirely on Western servers. DeepSeek’s open-source edge and rapid technical progress are helping it close the gap with much larger AI competitors — quietly but significantly.
