Ari Aster warns of AI’s creeping normality ahead of Eddington release

Ari Aster, the director behind Hereditary and Midsommar, is sounding the alarm on AI. In a recent Letterboxd interview promoting his upcoming A24 film Eddington, Aster described his growing unease with AI.

He framed it as a quasi-religious force reshaping reality in ways that are already irreversible. ‘If you talk to these engineers… they talk about AI as a god,’ said Aster. ‘They’re very worshipful of this thing. Whatever space there was between our lived reality and this imaginal reality — that’s disappearing.’

Aster’s comments suggest concern not just about the technology, but about the mindset surrounding its development. Eddington, set during the COVID-19 pandemic, is a neo-Western dark comedy.
It stars Joaquin Phoenix and Pedro Pascal as a sheriff and a mayor locked in a bitter digital feud.

The film reflects Aster’s fears about the dehumanising impact of modern technology. He drew from the ideas of media theorist Marshall McLuhan, referencing his phrase: ‘Man is the sex organ of the machine world.’ Aster asked, ‘Is this technology an extension of us, are we extensions of this technology, or are we here to usher it into being?’

The implication is clear: AI may not simply assist humanity—it might define it. Aster’s films often explore existential dread and loss of control. His perspective on AI taps into similar fears, but in real life. ‘The most uncanny thing about it is that it’s less uncanny than I want it to be,’ he said.

‘I see AI-generated videos, and they look like life. The longer we live in them, the more normal they become.’ The normalisation of artificial content strikes at the core of Aster’s unease. It also mirrors recent tensions in Hollywood over AI’s role in creative industries.

In 2023, the WGA and SAG-AFTRA fought for protections against AI-generated scripts and digital likenesses. Their strikes shut down the industry for months but ultimately won contract language limiting the use of AI.

The battles highlighted the same issue Aster warns of: the loss of artistic agency to machines, and the question of what happens when machine-made content becomes so seamless that it displaces real creativity.

‘Something huge is happening right now, and we have no say in it,’ he said. ‘I can’t believe we’re actually going to live through this and see what happens. Holy cow.’ Eddington is scheduled for release in the United States on 18 July 2025.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Southern Water uses AI to cut sewer floods

AI used in the sewer system has helped prevent homes in West Sussex from flooding, Southern Water has confirmed. The system was able to detect a fatberg in East Lavington before it caused damage.

The AI monitors sewer flow patterns and distinguishes between regular use, rainfall and developing blockages. On 16 June, digital sensors flagged an anomaly—leading teams to clear the fatberg before wastewater could flood gardens or homes.

‘We’re spotting hundreds of potential blockages before it’s too late,’ said Daniel McElhinney, proactive operations control manager at Southern Water. AI has reduced internal flooding by 40% and external flooding by 15%, the utility said.

Around 32,000 sewer level monitors are in place, checking for unusual flow activity that could signal a blockage or leak. Blocked sewers remain the main cause of pollution incidents, according to the company.
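Southern Water has not published how its monitoring system works internally; one common approach to the problem it describes, distinguishing a developing blockage from normal variation, is to flag readings that deviate sharply from a rolling baseline. A minimal illustrative sketch, with all names and thresholds hypothetical:

```python
from statistics import mean, stdev

def flag_anomalies(readings, window=24, threshold=3.0):
    """Flag sensor readings that deviate sharply from a rolling baseline.

    A sustained rise in sewer level with no corresponding rainfall could
    indicate a developing blockage such as a fatberg.
    """
    anomalies = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        # Flag if the new reading sits far outside the recent baseline
        if sigma > 0 and abs(readings[i] - mu) > threshold * sigma:
            anomalies.append(i)
    return anomalies

# Steady flow pattern followed by a sudden sustained rise at the end
levels = [10.0, 10.2, 9.9, 10.1, 10.0, 10.3, 9.8, 10.1] * 4 + [25.0]
print(flag_anomalies(levels, window=8))  # [32] — only the spike is flagged
```

A production system would also need to fold in rainfall data, as the article notes, so that storm inflow is not mistaken for a blockage.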

‘Most customers don’t realise the average sewer is only the size of an orange,’ McElhinney added. Even a small amount of cooking fat, combined with unflushable items, can lead to fatbergs and serious disruption.

Deepfake abuse in schools raises legal and ethical concerns

Deepfake abuse is emerging as a troubling form of peer-on-peer harassment in schools, targeting mainly girls with AI-generated explicit imagery. Tools that once required technical skill are now easily accessible to young people, allowing harmful content to be created and shared in seconds.

Though all US states and Washington, D.C., have laws addressing the distribution of nonconsensual intimate images, many do not cover AI-generated content or address the fact that minors are often both victims and perpetrators.

Some states have begun adapting laws to include proportional sentencing and behavioural interventions for minors. Advocates argue that education on AI, consent and digital literacy is essential to address the root causes and help young people understand the consequences of their actions.

Regulating tech platforms and app developers is also key, as companies continue to profit from tools used in digital exploitation. Experts say schools, families, lawmakers and platforms must share responsibility for curbing the spread of AI-generated abuse and ensuring support for those affected.

New digital stylist reshapes Mango’s e-commerce experience

Mango has launched a new AI-powered personal stylist designed to elevate the online shopping experience. Called Mango Stylist, the tool offers fashion advice and outfit suggestions based on each user’s preferences, creating a more interactive and intuitive way to browse.

Available through the Mango app and Instagram chat, the assistant uses natural language to provide styling tips and product recommendations tailored to the individual. It builds on Mango’s previous investment in generative AI and complements its existing customer service assistant, Iris.

The rollout is part of Mango’s broader 4E Strategic Plan, which prioritises technological innovation and customer engagement. By integrating Mango Stylist into its e-commerce platforms, the brand aims to streamline shopping and drive value across key markets, including the UK, Spain, Germany and the US.

Behind the scenes, Mango’s digital, data, and fashion teams collaborated on the project, drawing from over 15 machine learning platforms to fine-tune everything from pricing to product suggestions. The fashion chain sees this development as a major step towards delivering a seamless hybrid shopping experience.

Digital divination on demand

A growing number of people are turning to ChatGPT for spiritual insight, asking the AI to interpret dreams, deliver tarot readings or even channel messages from lost loved ones. Many describe these exchanges as oddly accurate or deeply comforting, praising the chatbot’s non-judgmental tone and round-the-clock availability.

For some, the experience borders on mystical. Users say ChatGPT feels like a mirror to their psyche, capable of sparking epiphanies or emotional release. The chatbot’s smooth, responsive dialogue can simulate wisdom, offering what feels like personalised guidance.

However, experts warn there are risks in mistaking machine learning for metaphysical truth. AI can invent responses, flatter users or reinforce biases, all without genuine understanding. Relying too heavily on a chatbot for spiritual clarity, psychologists say, may dull critical thinking or worsen underlying mental health struggles.

Still, others see promise in using AI as a reflective aid rather than a guru. Spiritual advisors suggest the tool may help frame questions or organise thoughts, but caution that lasting insight comes through lived experience, not code. In an era of instant answers, they say, meaningful growth still takes time, community and reflection.

Robotics set to have a ChatGPT moment

Vinod Khosla, the venture capitalist behind early bets on OpenAI, predicts that robotics will have a ChatGPT-style breakthrough within two to three years. He envisions adaptable, humanoid robots able to handle kitchen tasks, from chopping vegetables to washing dishes, for around £230 to £307 per month.

Current robots, particularly those from Chinese manufacturers, struggle in new environments and lack true self‑learning, a gap Khosla believes will soon close. He adds that while large established firms such as Apple have not taken the lead, transformative innovation is most likely to come from startups.

Nvidia CEO Jensen Huang sees a vast future in physical AI. Huang labels the robotics sector a multitrillion‑dollar opportunity and highlights autonomous vehicles as the first major commercial application. Similarly, Amazon plans to increase hiring in AI and robotics.

Beware of fake deals as Prime Day approaches

A surge in online scams is expected ahead of Amazon’s Prime Day, which runs from 8 to 11 July, as fraudsters use increasingly sophisticated tactics. Advice Direct Scotland is issuing a warning to shoppers across Scotland: AI-enhanced phishing emails, bogus renewal notices, and fake refund offers are on the rise.

In one common ruse, scammers impersonate Amazon in messages stating your Prime membership has expired or that your account needs urgent verification. Others go further, claiming your Amazon account has been hacked and demanding remote access to your device, something the real company never does. Victims in Scotland reportedly lost around £860,000 last year to similar crime, as scam technology becomes more convincing.

Advice Direct Scotland reminds shoppers not to rush and to trust their instincts. Genuine Amazon communications will never ask for remote access, passwords, or financial information over email or phone. If in doubt, hang up and check your account via official channels, or reach out to the charity’s ScamWatch hotline.

Those seeking guidance can contact Advice Direct Scotland via phone or online chat, or report suspected scams using the free ScamWatch tool. With Prime Day bargains tempting many, staying vigilant could mean avoiding a costly mistake.

Hackers use AI to create phishing sites in seconds

Hackers are now using generative AI tools to build convincing phishing websites in under a minute, researchers at Okta have warned. The company discovered that v0, a generative AI tool developed by Vercel, had been abused to replicate login portals for platforms such as Okta, Microsoft 365 and crypto services.

Using simple prompts like ‘build a copy of the website login.okta.com’, attackers can create fake login pages with little effort or technical skill. Okta’s investigation found no evidence of successful breaches, but noted that threat actors repeatedly used v0 to target new platforms.

Vercel has since removed the fraudulent sites and is working with Okta to create a system for reporting abuse. Security experts are concerned the speed and accessibility of generative AI tools could accelerate low-effort cybercrime on a massive scale.

Researchers also found cloned versions of the v0 tool on GitHub, which may allow continued abuse even if access to the original is restricted. Okta urges organisations to adopt passwordless systems, as traditional phishing detection methods are becoming obsolete.

AI bots are taking your meetings for you

AI-powered note takers are increasingly filling virtual meeting rooms, sometimes even outnumbering the humans present. Workers are now sending bots to listen, record, and summarise meetings they no longer feel the need to attend themselves.

Major platforms such as Zoom, Teams and Meet offer built-in AI transcription, while startups like Otter and Fathom provide bots that quietly join meetings or listen in through users’ devices. The tools raise new concerns about privacy, consent, and the erosion of human engagement.

Some workers worry that constant recording suppresses honest conversation and makes meetings feel performative. Others, including lawyers and business leaders, point out the legal grey zones created by using these bots without full consent.

AI model predicts sudden cardiac death more accurately

A new AI tool developed by researchers at Johns Hopkins University has shown promise in predicting sudden cardiac death among people with hypertrophic cardiomyopathy (HCM), outperforming existing clinical tools.

The model, known as MAARS (Multimodal AI for ventricular Arrhythmia Risk Stratification), uses a combination of medical records, cardiac MRI scans, and imaging reports to assess individual patient risk more accurately.

In early trials, MAARS achieved an AUC (area under the curve) score of 0.89 internally and 0.81 in external validation — both significantly higher than traditional risk calculators recommended by American and European guidelines.
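An AUC of 0.89 can be read as follows: given a randomly chosen patient who later suffered an event and one who did not, the model ranks the former as higher-risk about 89% of the time. An illustrative stdlib-only computation via the equivalent Mann-Whitney pair-counting formulation, using synthetic scores (not MAARS data):

```python
def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of (positive, negative) pairs the model ranks
    correctly, counting ties as half a win."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Synthetic example: 1 = sudden-cardiac-death event, 0 = no event
labels = [1, 1, 1, 0, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2, 0.1, 0.05]
print(auc(labels, scores))  # 14 of 15 pairs ranked correctly: ~0.933
```

A score of 0.5 corresponds to random ranking and 1.0 to a perfect separation of high- and low-risk patients, which is why the jump from the guideline calculators to 0.81–0.89 is significant.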

The improvement is attributed to its ability to interpret raw cardiac MRI data, particularly scans enhanced with gadolinium, which are often overlooked in standard assessments.

While the tool has the potential to personalise care and reduce unnecessary defibrillator implants, researchers caution that the study was limited to small cohorts from Johns Hopkins and North Carolina’s Sanger Heart & Vascular Institute.

They also acknowledged that MAARS’s reliance on large and complex datasets may pose challenges for widespread clinical use.

Nevertheless, the research team believes MAARS could mark a shift in managing HCM, the most common inherited heart condition.

By identifying hidden patterns in imaging and medical histories, the AI model may protect patients more effectively, especially younger individuals who remain at risk yet receive no benefit from current interventions.