Hackers use AI to create phishing sites in seconds

Hackers are now using generative AI tools to build convincing phishing websites in under a minute, researchers at Okta have warned. The company discovered that v0, a generative AI tool developed by Vercel, had been abused to replicate login portals for platforms such as Okta, Microsoft 365 and crypto services.

Using simple prompts like ‘build a copy of the website login.okta.com’, attackers can create fake login pages with little effort or technical skill. Okta’s investigation found no evidence of successful breaches, but noted that threat actors repeatedly used v0 to target new platforms.

Vercel has since removed the fraudulent sites and is working with Okta to create a system for reporting abuse. Security experts are concerned the speed and accessibility of generative AI tools could accelerate low-effort cybercrime on a massive scale.

Researchers also found cloned versions of the v0 tool on GitHub, which may allow continued abuse even if access to the original is restricted. Okta urges organisations to adopt passwordless systems, as traditional phishing detection methods are becoming obsolete.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI bots are taking your meetings for you

AI-powered note takers are increasingly filling virtual meeting rooms, sometimes even outnumbering the humans present. Workers are now sending bots to listen, record, and summarise meetings they no longer feel the need to attend themselves.

Major platforms such as Zoom, Teams and Meet offer built-in AI transcription, while startups like Otter and Fathom provide bots that quietly join meetings or listen in through users’ devices. The tools raise new concerns about privacy, consent, and the erosion of human engagement.

Some workers worry that constant recording suppresses honest conversation and makes meetings feel performative. Others, including lawyers and business leaders, point out the legal grey zones created by using these bots without full consent.


AI model predicts sudden cardiac death more accurately

A new AI tool developed by researchers at Johns Hopkins University has shown promise in predicting sudden cardiac death among people with hypertrophic cardiomyopathy (HCM), outperforming existing clinical tools.

The model, known as MAARS (Multimodal AI for ventricular Arrhythmia Risk Stratification), uses a combination of medical records, cardiac MRI scans, and imaging reports to assess individual patient risk more accurately.

In early trials, MAARS achieved an AUC (area under the curve) score of 0.89 internally and 0.81 in external validation — both significantly higher than the scores of traditional risk calculators recommended by American and European guidelines.
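For readers unfamiliar with the metric, AUC is the probability that a model ranks a randomly chosen positive case (here, a patient who suffers the event) above a randomly chosen negative one, so 0.5 is chance and 1.0 is perfect ranking. The sketch below illustrates the calculation with invented labels and scores; it has nothing to do with the actual MAARS data.

```python
# Illustrative only: AUC measures how often a model ranks a true positive
# above a true negative. Labels and scores below are invented examples.

def auc(labels, scores):
    """Probability that a randomly chosen positive outranks a negative
    (ties count as half a win)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 0, 0, 1, 0]               # 1 = event occurred
scores = [0.9, 0.7, 0.8, 0.3, 0.6, 0.5]   # hypothetical model risk estimates
print(f"AUC: {auc(labels, scores):.2f}")
```

A score of 0.89, as reported internally for MAARS, therefore means the model ranks a genuinely at-risk patient above a not-at-risk patient about nine times out of ten.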

The improvement is attributed to its ability to interpret raw cardiac MRI data, particularly scans enhanced with gadolinium, which are often overlooked in standard assessments.

While the tool has the potential to personalise care and reduce unnecessary defibrillator implants, researchers caution that the study was limited to small cohorts from Johns Hopkins and North Carolina’s Sanger Heart & Vascular Institute.

They also acknowledged that MAARS’s reliance on large and complex datasets may pose challenges for widespread clinical use.

Nevertheless, the research team believes MAARS could mark a shift in managing HCM, the most common inherited heart condition.

By identifying hidden patterns in imaging and medical histories, the AI model may protect patients more effectively, especially younger individuals who remain at risk yet receive no benefit from current interventions.


AI brings Babylon’s lost hymn back to life

A hymn to the ancient city of Babylon has been reconstructed after 2,100 years using AI to piece together 30 clay tablet fragments. Once lost after Alexander the Great’s conquest, the song praises the city’s grandeur, morals and daily life in exceptional poetic detail.

The hymn, sung to the god Marduk, depicts Babylon as a flourishing paradise filled with jewelled gates, verdant pastures and flowing rivers. AI tools helped researchers quickly assemble and translate the fragments, revealing a third of the original 250-line text.

The poem sheds rare light on Babylonian values, highlighting kindness to foreigners, the release of prisoners and the sanctity of orphans. It also gives a surprising glimpse into the role of women, including cloistered priestesses who acted as midwives.

Parts of the hymn were copied out by schoolchildren up to 1,400 years after it was composed, showing its cultural importance. Scholars now place it alongside the Epic of Gilgamesh as one of the most treasured literary works from ancient Mesopotamia.


TikTok struggles to stop the spread of hateful AI videos

Google’s Veo 3 video generator has enabled a new wave of racist AI content to spread across TikTok, despite both platforms having strict policies banning hate speech.

According to MediaMatters, several TikTok accounts have shared AI-generated videos promoting antisemitic and anti-Black stereotypes, many of which circulated widely before being removed.

These short, highly realistic videos often included offensive depictions, and the visible ‘Veo’ watermark confirmed their origin from Google’s model.

While both TikTok and Google officially prohibit the creation and distribution of hateful material, enforcement has been patchy. TikTok claims to use both automated systems and human moderators, yet the overwhelming volume of uploads appears to have delayed action.

Although TikTok says it banned over half the accounts before MediaMatters’ findings were published, harmful videos still managed to reach large audiences.

Google also maintains a Prohibited Use Policy banning hate-driven content. However, Veo 3’s advanced realism and the difficulty of detecting coded prompts make it easier for users to bypass safeguards.

Testing by reporters suggests the model is more permissive than previous iterations, raising concerns about its ability to filter out offensive material before it is created.

With Google planning to integrate Veo 3 into YouTube Shorts, concerns are rising that harmful content may soon flood other platforms. TikTok and Google appear to lack the enforcement capacity to keep pace with the abuse of generative AI.

Despite strict rules on paper, both companies are struggling to prevent their technology from fuelling racist narratives at scale.


Meta pursues two AI paths with internal tension

Meta’s AI strategy is facing internal friction, with CEO Mark Zuckerberg and Chief AI Scientist Yann LeCun taking sharply different paths toward the company’s future.

While Zuckerberg is doubling down on superintelligence, even launching a new division called Meta Superintelligence Labs, LeCun argues that even ‘cat-level’ intelligence remains a distant goal.

The new lab, led by Scale AI founder Alexandr Wang, reflects Zuckerberg’s ambition to accelerate progress in large language models — a move triggered by disappointment in Meta’s recent Llama performance.

Reports suggest the models were tested with customised benchmarks to appear more capable than they were. That prompted frustration at the top, especially after Chinese firm DeepSeek built more advanced tools using Meta’s open-source Llama.

LeCun’s long-standing advocacy for open-source AI now appears at odds with the company’s shifting priorities. While he promotes openness for diversity and democratic access, Zuckerberg’s recent memo did not mention open-source principles.

Internally, executives have even discussed backing away from Llama and turning to closed models like those from OpenAI or Anthropic instead.

Meta is pursuing both visions — supporting LeCun’s research arm, FAIR, and investing in a new, more centralised superintelligence effort. The company has offered massive compensation packages to OpenAI researchers, with some reportedly offered up to $100 million.

Whether Meta continues balancing both philosophies or chooses one outright could determine the direction of its AI legacy.


Lloyds Bank to test neurosymbolic AI for better customer support

Lloyds has partnered with UnlikelyAI to test neurosymbolic AI across its operations to enhance customer service and reinforce its commitment to responsible AI. The trial will take place in Lloyds’ Innovation Sandbox and focus on ensuring accurate, consistent and explainable outputs.

UnlikelyAI combines neural networks with logic-based symbolic reasoning to produce AI that avoids hallucinations and supports transparent decision-making. The firm was founded by William Tunstall-Pedoe, creator of the voice assistant Evi, which Amazon acquired and used in building Alexa.
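The neurosymbolic idea described above can be sketched in a few lines: a statistical model proposes an answer, and a symbolic rule layer checks it against known facts before anything reaches the customer. Everything below — the rule, the function names, the banking scenario — is invented for illustration and does not describe UnlikelyAI’s actual system.

```python
# Hypothetical sketch of a neurosymbolic pipeline: a neural component
# suggests a claim, and a symbolic rule layer verifies it against facts.
# All names and rules here are invented for illustration.

RULES = {
    # A claim is only asserted if its logical condition holds.
    "overdraft_fee_applies": lambda facts: facts["balance"] < 0,
}

def neural_suggest(question):
    # Stand-in for a language model's fluent but unverified answer.
    return {"claim": "overdraft_fee_applies", "confidence": 0.92}

def answer(question, facts):
    suggestion = neural_suggest(question)
    rule = RULES[suggestion["claim"]]
    if rule(facts):  # symbolic check grounds the answer in account facts
        return f"Yes ({suggestion['confidence']:.0%} model confidence)"
    return "No: the rule for this claim is not satisfied by the account facts."

print(answer("Will I be charged an overdraft fee?", {"balance": -12.50}))
```

The point of the symbolic layer is that the system can only assert claims whose logical conditions are verifiably met, which is how this architecture aims to avoid the hallucinations mentioned above.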

Lloyds hopes the technology will drive more personalised customer support and improve internal efficiency. The bank recently migrated its AI platforms to Google Cloud, further strengthening its digital infrastructure.

The announcement follows increased scrutiny from MPs over banks’ reliance on AI and tech vulnerabilities. Lloyds CEO Charlie Nunn believes new large language models could significantly improve customer interaction and personalised advice.


Artists explore meaning and memory at Antwerp Art Weekend

At Antwerp Art Weekend, two standout exhibitions by Eddie Peake and the Amsterdam-based collective Metahaven explored how meaning shifts or falls apart in an age shaped by AI, identity, and emotional complexity.

Metahaven’s film follows a character interacting with an AI assistant while exploring poetry by Eugene Ostashevsky. It contrasts AI’s predictive language models with the unpredictable nature of poetry, using visual metaphors to expose how AI mimics language without fully grasping it.

Meanwhile, Peake’s immersive installation at TICK TACK turned the Belgian gallery into a psychological labyrinth, combining architectural intrusion, raw paintings, and a haunting audio piece. His work considers the weight of identity, sexuality, and memory, moving from aggression to vulnerability.

Despite their differences, both projects provoke questions about how language, identity, and emotion are formed and fractured. Each invites viewers to reconsider the boundaries of expression in a world increasingly influenced by AI and abstraction.


Alibaba Cloud opens AI centre in Singapore to drive regional innovation

Alibaba Cloud has launched its AI Global Competency Centre in Singapore to drive innovation and support growing regional demand for cloud and AI technologies. The centre aims to help more than 5,000 businesses and 100,000 developers access advanced tools.

The facility includes an innovation lab offering curated datasets, token credits, and tailored support for real-world AI solutions. A strong focus will be placed on building a robust talent pipeline, with plans to train 100,000 AI professionals each year through partnerships with over 120 universities.

Alibaba Cloud is positioning Singapore as a key digital hub, reinforcing its role in the Asia-Pacific AI ecosystem. The company also announced its third data centre in Malaysia and a second one in the Philippines, scheduled for October, to meet surging demand in Southeast Asia.

The launch marks Alibaba Cloud’s continued global expansion. Executives have underlined their ambition to make Singapore a global AI and cloud innovation leader through strategic partnerships and infrastructure development.


OpenAI and Oracle join forces for massive AI data centre expansion

OpenAI has signed a significant cloud computing deal with Oracle worth $30 billion per year, aiming to secure around 4.5GW of capacity through the Stargate joint venture, in which Oracle is a key investor.

Oracle plans to develop several large-scale data centres across the United States, including a potential expansion of its Abilene, Texas, site from 1.2GW to 2GW.

According to reports from Bloomberg and the Financial Times, other locations under consideration include Michigan, Wisconsin, Wyoming, New Mexico, Georgia, Ohio, and Pennsylvania.

In addition to its collaboration with Oracle, OpenAI continues to use Microsoft Azure as its primary cloud provider and works with CoreWeave and Google. Notably, OpenAI leverages Google’s custom TPUs in some operations.

Despite the partnerships, OpenAI is pursuing plans to build its own data centre infrastructure. The company also intends to construct a Stargate campus in the United Arab Emirates, in collaboration with Oracle, Nvidia, Cisco, SoftBank, and G42, and is scouting global locations for future facilities.

The massive investment underscores OpenAI’s growing compute needs and the global scale of its AI ambitions.
