EU PREVAIL project opens Edge AI platform to users in June

The European Union’s PREVAIL project is preparing to open its Edge AI services to external users in June 2025.

Coordinated by Europe’s top research and technology organisations—CEA-Leti, Fraunhofer-Gesellschaft, imec, and VTT—the initiative offers a shared, multi-hub infrastructure designed to speed up the development and commercialisation of next-generation Edge AI technologies.

Through its platform, European designers will gain access to advanced chip prototyping capabilities and full design support using standard commercial tools.

PREVAIL combines commercial foundry processes with advanced technology modules developed in partner clean rooms. These include embedded non-volatile memories (eNVM), silicon photonics, and 3D integration technologies such as silicon interposers and packaging innovations.

Initial demonstrators, already in development with industry partners, will serve as test cases to ensure compatibility with a broad range of applications and future scalability.

From July 2025, a €20 million EU-funded call under the ‘Low Power Edge AI’ initiative will help selected customers co-finance their access to the platform. Whether supported by EU funds or independently financed, users will be able to design chips using one of four shared platforms.

The consortium has also set up a user interface team to manage technical support and provide access to Process Design Kits and Design Rule Manuals.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Digital divination on demand

A growing number of people are turning to ChatGPT for spiritual insight, asking the AI to interpret dreams, deliver tarot readings or even channel messages from lost loved ones. Many describe these exchanges as oddly accurate or deeply comforting, praising the chatbot’s non-judgmental tone and round-the-clock availability.

For some, the experience borders on mystical. Users say ChatGPT feels like a mirror to their psyche, capable of sparking epiphanies or emotional release. The chatbot’s smooth, responsive dialogue can simulate wisdom, offering what feels like personalised guidance.

However, experts warn there are risks in mistaking machine learning for metaphysical truth. AI can invent responses, flatter users or reinforce biases, all without genuine understanding. Relying too heavily on a chatbot for spiritual clarity, psychologists say, may dull critical thinking or worsen underlying mental health struggles.

Still, others see promise in using AI as a reflective aid rather than a guru. Spiritual advisors suggest the tool may help frame questions or organise thoughts, but caution that lasting insight comes through lived experience, not code. In an era of instant answers, they say, meaningful growth still takes time, community and reflection.


Robotics set to have a ChatGPT moment

Vinod Khosla, the venture capitalist behind early bets on OpenAI, predicts that a ChatGPT-style breakthrough in robotics will arrive within two to three years. He envisions adaptable humanoid robots able to handle kitchen tasks, from chopping vegetables to washing dishes, for around £230 to £307 per month.

Current robots, particularly those from Chinese manufacturers, struggle in new environments and lack true self‑learning, a gap Khosla believes will soon close. He adds that while large established firms like Apple have not taken the lead, startups are the most likely source of transformative innovation.

Nvidia CEO Jensen Huang sees a vast future in physical AI. Huang labels the robotics sector a multitrillion‑dollar opportunity and highlights autonomous vehicles as the first major commercial application. Similarly, Amazon plans to increase hiring in AI and robotics.


Beware of fake deals as Prime Day approaches

A surge in online scams is expected ahead of Amazon’s Prime Day, which runs from 8 to 11 July, as fraudsters use increasingly sophisticated tactics. Advice Direct Scotland is issuing a warning to shoppers across Scotland: AI-enhanced phishing emails, bogus renewal notices, and fake refund offers are on the rise.

In one common ruse, scammers impersonate Amazon in messages claiming your Prime membership has expired or that your account needs urgent verification. Others go further, claiming your Amazon account has been hacked and demanding remote access to your device, something the real company never does. Victims in Scotland reportedly lost around £860,000 to such scams last year, as the technology behind them becomes more convincing.

Advice Direct Scotland reminds shoppers not to rush and to trust their instincts. Genuine Amazon communications will never ask for remote access, passwords, or financial information over email or phone. If in doubt, hang up and check your account via official channels, or reach out to the charity’s ScamWatch hotline.

Those seeking guidance can contact Advice Direct Scotland via phone or online chat, or report suspected scams using the free ScamWatch tool. With Prime Day bargains tempting many, staying vigilant could mean avoiding a costly mistake.


Hackers use AI to create phishing sites in seconds

Hackers are now using generative AI tools to build convincing phishing websites in under a minute, researchers at Okta have warned. The company discovered that v0, a generative AI tool developed by Vercel, had been abused to replicate login portals for platforms such as Okta, Microsoft 365 and crypto services.

Using simple prompts like ‘build a copy of the website login.okta.com’, attackers can create fake login pages with little effort or technical skill. Okta’s investigation found no evidence of successful breaches, but noted that threat actors repeatedly used v0 to target new platforms.

Vercel has since removed the fraudulent sites and is working with Okta to create a system for reporting abuse. Security experts are concerned the speed and accessibility of generative AI tools could accelerate low-effort cybercrime on a massive scale.

Researchers also found cloned versions of the v0 tool on GitHub, which may allow continued abuse even if access to the original is restricted. Okta urges organisations to adopt passwordless systems, as traditional phishing detection methods are becoming obsolete.


AI bots are taking your meetings for you

AI-powered note takers are increasingly filling virtual meeting rooms, sometimes even outnumbering the humans present. Workers are now sending bots to listen, record, and summarise meetings they no longer feel the need to attend themselves.

Major platforms such as Zoom, Teams and Meet offer built-in AI transcription, while startups like Otter and Fathom provide bots that quietly join meetings or listen in through users’ devices. The tools raise new concerns about privacy, consent, and the erosion of human engagement.

Some workers worry that constant recording suppresses honest conversation and makes meetings feel performative. Others, including lawyers and business leaders, point out the legal grey zones created by using these bots without full consent.


AI model predicts sudden cardiac death more accurately

A new AI tool developed by researchers at Johns Hopkins University has shown promise in predicting sudden cardiac death among people with hypertrophic cardiomyopathy (HCM), outperforming existing clinical tools.

The model, known as MAARS (Multimodal AI for ventricular Arrhythmia Risk Stratification), uses a combination of medical records, cardiac MRI scans, and imaging reports to assess individual patient risk more accurately.

In early trials, MAARS achieved an AUC (area under the curve) score of 0.89 internally and 0.81 in external validation — both significantly higher than traditional risk calculators recommended by American and European guidelines.
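For context, an AUC score is the probability that a model assigns a higher risk score to a randomly chosen patient who had an event than to one who did not (0.5 is chance, 1.0 is perfect ranking). A minimal, self-contained sketch of the calculation, using invented labels and scores rather than MAARS data:

```python
# AUC as a pairwise ranking probability: the chance that a positive case
# (event = 1) receives a higher score than a negative case (event = 0).
# The labels and scores below are invented for illustration only.
def auc(y_true, y_score):
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    # Count positive-vs-negative pairs ranked correctly; ties count as half.
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y_true = [1, 1, 1, 1, 0, 0, 0, 0]                    # 1 = cardiac event occurred
y_score = [0.8, 0.7, 0.9, 0.6, 0.1, 0.4, 0.2, 0.75]  # hypothetical risk scores
print(auc(y_true, y_score))  # 0.875: 14 of 16 pairs ranked correctly
```

On this toy data the score of 0.875 falls between the 0.81 and 0.89 figures reported for MAARS, which is why such values are read as strong discrimination between high- and low-risk patients.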

The improvement is attributed to its ability to interpret raw cardiac MRI data, particularly scans enhanced with gadolinium, which are often overlooked in standard assessments.

While the tool has the potential to personalise care and reduce unnecessary defibrillator implants, researchers caution that the study was limited to small cohorts from Johns Hopkins and North Carolina’s Sanger Heart & Vascular Institute.

They also acknowledged that MAARS’s reliance on large and complex datasets may pose challenges for widespread clinical use.

Nevertheless, the research team believes MAARS could mark a shift in managing HCM, the most common inherited heart condition.

By identifying hidden patterns in imaging and medical histories, the AI model may protect patients more effectively, especially younger individuals who remain at risk yet receive no benefit from current interventions.


AI brings Babylon’s lost hymn back to life

A hymn to the ancient city of Babylon has been reconstructed after 2,100 years using AI to piece together 30 clay tablet fragments. Once lost after Alexander the Great’s conquest, the song praises the city’s grandeur, morals and daily life in exceptional poetic detail.

The hymn, sung to the god Marduk, depicts Babylon as a flourishing paradise filled with jewelled gates, verdant pastures and flowing rivers. AI tools helped researchers quickly assemble and translate the fragments, revealing a third of the original 250-line text.

The poem sheds rare light on Babylonian values, highlighting kindness to foreigners, the release of prisoners and the sanctity of orphans. It also gives a surprising glimpse into the role of women, including cloistered priestesses who acted as midwives.

Parts of the hymn were copied out by schoolchildren up to 1,400 years after it was composed, showing its cultural importance. Scholars now place it alongside the Epic of Gilgamesh as one of the most treasured literary works from ancient Mesopotamia.


TikTok struggles to stop the spread of hateful AI videos

Google’s Veo 3 video generator has enabled a new wave of racist AI content to spread across TikTok, despite both platforms having strict policies banning hate speech.

According to MediaMatters, several TikTok accounts have shared AI-generated videos promoting antisemitic and anti-Black stereotypes, many of which circulated widely before being removed.

These short, highly realistic videos often included offensive depictions, and the visible ‘Veo’ watermark confirmed they were generated with Google’s model.

While both TikTok and Google officially prohibit the creation and distribution of hateful material, enforcement has been patchy. TikTok claims to use both automated systems and human moderators, yet the overwhelming volume of uploads appears to have delayed action.

Although TikTok says it banned over half the accounts before MediaMatters’ findings were published, harmful videos still managed to reach large audiences.

Google also maintains a Prohibited Use Policy banning hate-driven content. However, Veo 3’s advanced realism, combined with the difficulty of detecting coded prompts, makes it easier for users to bypass safeguards.

Testing by reporters suggests the model is more permissive than previous iterations, raising concerns about its ability to filter out offensive material before it is created.

With Google planning to integrate Veo 3 into YouTube Shorts, concerns are rising that harmful content may soon flood other platforms. TikTok and Google appear to lack the enforcement capacity to keep pace with the abuse of generative AI.

Despite strict rules on paper, both companies are struggling to prevent their technology from fuelling racist narratives at scale.


Meta pursues two AI paths with internal tension

Meta’s AI strategy is facing internal friction, with CEO Mark Zuckerberg and Chief AI Scientist Yann LeCun taking sharply different paths toward the company’s future.

While Zuckerberg is doubling down on superintelligence, even launching a new division called Meta Superintelligence Labs, LeCun argues that even ‘cat-level’ intelligence remains a distant goal.

The new lab, led by Scale AI founder Alexandr Wang, signals Zuckerberg’s ambition to accelerate progress in large language models — a move reportedly triggered by disappointment with the recent performance of Meta’s Llama models.

Reports suggest the models were tested with customised benchmarks to appear more capable than they were. That prompted frustration at the top, especially after Chinese firm DeepSeek built more advanced tools using Meta’s open-source Llama.

LeCun’s long-standing advocacy for open-source AI now appears at odds with the company’s shifting priorities. While he promotes openness for diversity and democratic access, Zuckerberg’s recent memo did not mention open-source principles.

Internally, executives have even discussed backing away from Llama and turning to closed models like those from OpenAI or Anthropic instead.

Meta is pursuing both visions — supporting LeCun’s research arm, FAIR, and investing in a new, more centralised superintelligence effort. The company has offered massive compensation packages to OpenAI researchers, with some reportedly offered up to $100 million.

Whether Meta continues balancing both philosophies or chooses one outright could determine the direction of its AI legacy.
