Council of Europe picks Jylo to power AI platform

The Council of Europe has chosen Jylo, a European enterprise AI provider, to support over 3,000 users across its organisation.

The decision followed a competitive selection process involving multiple AI vendors, with Jylo standing out for its regulatory compliance and platform adaptability.

As Europe’s leading human rights body, the Council aims to use AI responsibly to support its legal and policy work. Jylo’s platform will streamline document-based workflows and reduce administrative burdens, helping staff focus on critical democratic and legal missions.

Leaders from both Jylo and the Council praised the collaboration. Jylo CEO Shawn Curran said the partnership reflects shared values around regulatory compliance and innovation.

The Council’s CIO, John Hunter, described Jylo’s commitment to secure AI as a perfect fit for the institution’s evolving digital strategy.

Jylo’s AI Assistant and automation features are designed specifically for knowledge-driven organisations. The rollout is expected to strengthen the Council’s internal efficiency and reinforce Jylo’s standing as a trusted AI partner across the European public and legal sectors.

Spotify hit by AI band hoax controversy

A band called The Velvet Sundown has gone viral on Spotify, gaining over 850,000 monthly listeners, yet almost nothing is known about the people behind it.

With no live performances, interviews, or social media presence for its supposed members, the group has fuelled growing speculation that both it and its music may be AI-generated.

The mystery deepened after Rolling Stone first reported that a spokesperson had admitted the tracks were made using the AI tool Suno, only for that spokesperson himself to be exposed as fake.

The band denies any connection to the individual, stating on Spotify that the account impersonating it on X is also fake.

AI detection tools have added to the confusion: rival platform Deezer flagged the music as ‘100% AI-generated’, while Spotify has remained silent.

While Spotify CEO Daniel Ek has said AI music is not banned from the platform, he has expressed concern about tools that mimic real artists.

The case has reignited industry fears over AI’s impact on musicians. Experts warn that public trust in online content is weakening.

Musicians and advocacy groups argue that AI is undercutting creativity by training on human-made songs without permission. As copyright battles continue, pressure is mounting for stronger government regulation.

AI errors are creating new jobs for human experts

A growing number of writers and developers are finding steady work correcting the flawed outputs of AI systems that businesses use.

From bland marketing copy to broken website code, over-reliance on AI tools like ChatGPT is causing costly setbacks that require human intervention.

In Arizona, writer Sarah Skidd was paid $100 an hour to rewrite poor-quality website text that had originally been produced entirely by AI.

Her experience is echoed by other professionals who now spend most of their time reworking AI content rather than writing from scratch.

UK digital agency owner Sophie Warner reports that clients increasingly use AI-generated code, which has sometimes crashed websites and left businesses vulnerable to security risks. The resulting fixes often take longer and cost more than hiring an expert would have in the first place.

Experts warn that businesses are adopting AI too hastily, without the proper infrastructure or a clear understanding of its limitations.

While AI offers benefits, poor implementation can lead to reputational damage, increased costs, and a growing dependence on professionals to clean up the mess.

Ari Aster warns of AI’s creeping normality ahead of Eddington release

Ari Aster, the director behind Hereditary and Midsommar, is sounding the alarm on AI. In a recent Letterboxd interview promoting his upcoming A24 film Eddington, Aster described his growing unease with AI.

He framed it as a quasi-religious force reshaping reality in ways that are already irreversible. ‘If you talk to these engineers… they talk about AI as a god,’ said Aster. ‘They’re very worshipful of this thing. Whatever space there was between our lived reality and this imaginal reality — that’s disappearing.’

Aster’s comments suggest concern not just about the technology, but about the mindset surrounding its development. Eddington, set during the COVID-19 pandemic, is a neo-Western dark comedy. It stars Joaquin Phoenix and Pedro Pascal as a sheriff and a mayor locked in a bitter digital feud.

The film reflects Aster’s fears about the dehumanising impact of modern technology. He drew from the ideas of media theorist Marshall McLuhan, referencing his phrase: ‘Man is the sex organ of the machine world.’ Aster asked, ‘Is this technology an extension of us, are we extensions of this technology, or are we here to usher it into being?’

The implication is clear: AI may not simply assist humanity—it might define it. Aster’s films often explore existential dread and loss of control. His perspective on AI taps into similar fears, but in real life. ‘The most uncanny thing about it is that it’s less uncanny than I want it to be,’ he said.

‘I see AI-generated videos, and they look like life. The longer we live in them, the more normal they become.’ The normalisation of artificial content strikes at the core of Aster’s unease. It also mirrors recent tensions in Hollywood over AI’s role in creative industries.

In 2023, the WGA and SAG-AFTRA fought for protections against AI-generated scripts and likenesses. Their strikes shut down the industry for months but won contract language limiting AI use.

The battles highlighted the same issue Aster warns of—losing artistic agency to machines. ‘What happens when content becomes so seamless, it replaces real creativity?’ he seems to ask.

‘Something huge is happening right now, and we have no say in it,’ he said. ‘I can’t believe we’re actually going to live through this and see what happens. Holy cow.’ Eddington is scheduled for release in the United States on 18 July 2025.

Southern Water uses AI to cut sewer floods

AI used in the sewer system has helped prevent homes in West Sussex from flooding, Southern Water has confirmed. The system was able to detect a fatberg in East Lavington before it caused damage.

The AI monitors sewer flow patterns and distinguishes between regular use, rainfall and developing blockages. On 16 June, digital sensors flagged an anomaly—leading teams to clear the fatberg before wastewater could flood gardens or homes.
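
Southern Water has not published the details of its detection model, but the underlying idea, flagging sensor readings that deviate sharply from a recent baseline, can be sketched in a few lines. The snippet below is a hypothetical illustration only; the window size, threshold and data layout are assumptions, not the utility's actual system.

```python
# Illustrative sketch only: a simple rolling-baseline check for unusual
# sewer level readings. Window size, threshold and data layout are
# assumptions for demonstration, not Southern Water's actual system.
from statistics import mean, stdev

def flag_anomalies(levels, window=48, z_threshold=3.0):
    """Flag readings that deviate sharply from the recent baseline.

    levels: chronological list of sewer level readings (e.g. mm).
    window: number of prior readings used as the baseline.
    z_threshold: how many standard deviations count as unusual.
    """
    flagged = []
    for i in range(window, len(levels)):
        baseline = levels[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            continue  # flat baseline; nothing to compare against
        z = (levels[i] - mu) / sigma
        if z > z_threshold:  # level rising faster than normal flow would explain
            flagged.append((i, levels[i], round(z, 1)))
    return flagged

# Example: steady flow, then a sustained rise that could indicate a blockage.
readings = [100 + (i % 5) for i in range(60)] + [130, 150, 175, 210]
print(flag_anomalies(readings))
```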

‘We’re spotting hundreds of potential blockages before it’s too late,’ said Daniel McElhinney, proactive operations control manager at Southern Water. AI has reduced internal flooding by 40% and external flooding by 15%, the utility said.

Around 32,000 sewer level monitors are in place, checking for unusual flow activity that could signal a blockage or leak. Blocked sewers remain the main cause of pollution incidents, according to the company.

‘Most customers don’t realise the average sewer is only the size of an orange,’ McElhinney added. Even a small amount of cooking fat, combined with unflushable items, can lead to fatbergs and serious disruption.

Deepfake abuse in schools raises legal and ethical concerns

Deepfake abuse is emerging as a troubling form of peer-on-peer harassment in schools, targeting mainly girls with AI-generated explicit imagery. Tools that once required technical skill are now easily accessible to young people, allowing harmful content to be created and shared in seconds.

Though all US states and Washington, D.C. have laws addressing the distribution of nonconsensual intimate images, many do not cover AI-generated content or address the fact that minors are often both victims and perpetrators.

Some states have begun adapting laws to include proportional sentencing and behavioural interventions for minors. Advocates argue that education on AI, consent and digital literacy is essential to address the root causes and help young people understand the consequences of their actions.

Regulating tech platforms and app developers is also key, as companies continue to profit from tools used in digital exploitation. Experts say schools, families, lawmakers and platforms must share responsibility for curbing the spread of AI-generated abuse and ensuring support for those affected.

New digital stylist reshapes Mango’s e-commerce experience

Mango has launched a new AI-powered personal stylist designed to elevate the online shopping experience. Called Mango Stylist, the tool offers fashion advice and outfit suggestions based on each user’s preferences, creating a more interactive and intuitive way to browse.

Available through the Mango app and Instagram chat, the assistant uses natural language to provide styling tips and product recommendations tailored to the individual. It builds on Mango’s previous investment in generative AI and complements its existing customer service assistant, Iris.

The rollout is part of Mango’s broader 4E Strategic Plan, which prioritises technological innovation and customer engagement. By integrating Mango Stylist into its e-commerce platforms, the brand aims to streamline shopping and drive value across key markets, including the UK, Spain, Germany and the US.

Behind the scenes, Mango’s digital, data, and fashion teams collaborated on the project, drawing from over 15 machine learning platforms to fine-tune everything from pricing to product suggestions. The fashion chain sees this development as a major step towards delivering a seamless hybrid shopping experience.

Digital divination on demand

A growing number of people are turning to ChatGPT for spiritual insight, asking the AI to interpret dreams, deliver tarot readings or even channel messages from lost loved ones. Many describe these exchanges as oddly accurate or deeply comforting, praising the chatbot’s non-judgmental tone and round-the-clock availability.

For some, the experience borders on mystical. Users say ChatGPT feels like a mirror to their psyche, capable of sparking epiphanies or emotional release. The chatbot’s smooth, responsive dialogue can simulate wisdom, offering what feels like personalised guidance.

However, experts warn there are risks in mistaking machine learning for metaphysical truth. AI can invent responses, flatter users or reinforce biases, all without genuine understanding. Relying too heavily on a chatbot for spiritual clarity, psychologists say, may dull critical thinking or worsen underlying mental health struggles.

Still, others see promise in using AI as a reflective aid rather than a guru. Spiritual advisors suggest the tool may help frame questions or organise thoughts, but caution that lasting insight comes through lived experience, not code. In an era of instant answers, they say, meaningful growth still takes time, community and reflection.

Robotics set to have a ChatGPT moment

Vinod Khosla, the venture capitalist behind early bets on OpenAI, predicts that a ChatGPT-style breakthrough in robotics will arrive within two to three years. He envisions adaptable humanoid robots able to handle kitchen tasks, from chopping vegetables to washing dishes, for around £230 to £307 per month.

Current robots, particularly those from Chinese manufacturers, struggle in new environments and lack true self-learning, a gap Khosla believes will soon close. He adds that while large established firms like Apple have not taken the lead, startups are where transformative innovation is most likely to emerge.

Nvidia CEO Jensen Huang sees a vast future in physical AI. Huang labels the robotics sector a multitrillion‑dollar opportunity and highlights autonomous vehicles as the first major commercial application. Similarly, Amazon plans to increase hiring in AI and robotics.

Beware of fake deals as Prime Day approaches

A surge in online scams is expected ahead of Amazon’s Prime Day, which runs from 8 to 11 July, as fraudsters use increasingly sophisticated tactics. Advice Direct Scotland is issuing a warning to shoppers across Scotland: AI-enhanced phishing emails, bogus renewal notices, and fake refund offers are on the rise.

In one common ruse, scammers impersonate Amazon in messages stating that your Prime membership has expired or that your account needs urgent verification. Others go further, claiming your Amazon account has been hacked and demanding remote access to your device, something the real company never does. Victims in Scotland reportedly lost around £860,000 to similar scams last year, as the technology behind them becomes more convincing.

Advice Direct Scotland reminds shoppers not to rush and to trust their instincts. Genuine Amazon communications will never ask for remote access, passwords, or financial information over email or phone. If in doubt, hang up and check your account via official channels, or reach out to the charity’s ScamWatch hotline.

Those seeking guidance can contact Advice Direct Scotland via phone or online chat, or report suspected scams using the free ScamWatch tool. With Prime Day bargains tempting many, staying vigilant could mean avoiding a costly mistake.
