Jersey artists push back against AI art

A Jersey illustrator has spoken out against the growing use of AI-generated images, calling the trend ‘heartbreaking’ for artists who fear losing their livelihoods to technology.

Abi Overland, known for her intricate hand-drawn illustrations, said it was deeply concerning to see AI-created visuals shared online with no acknowledgement of their impact on human creators.

She warned that AI systems often rely on artists’ existing work for training, raising serious questions about copyright and fairness.

Overland stressed that artists’ images are not simply the product of new tools but of years of human experience and emotion, something AI cannot replicate. She believes the increasing normalisation of AI content is dangerous and could discourage aspiring artists from entering the field.

Fellow Jersey illustrator Jamie Willow echoed the concern, saying many local companies are already replacing human work with AI outputs, undermining the value of art created with genuine emotional connection and moral integrity.

However, not everyone sees AI as a threat. Sebastian Lawson of Digital Jersey argued that artists could instead use AI to enhance their creativity rather than replace it. He insisted that human creators would always have an edge thanks to their unique insight and ability to convey meaning through their work.

The debate comes as the House of Lords recently blocked the UK government’s Data Bill for a second time, demanding stronger protections for artists and musicians against AI misuse.

Meanwhile, government officials have said they will not consider any copyright changes unless they are sure such moves would benefit creators as well as tech companies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Chicago Sun-Times under fire for fake summer guide

The Chicago Sun-Times has come under scrutiny after its 18 May issue featured a summer guide riddled with fake books, quotes, and experts, many of which appear to have been generated by AI.

Among genuine titles like Call Me By Your Name, readers encountered fictional works wrongly attributed to real authors, such as Min Jin Lee and Rebecca Makkai. The guide also cited individuals who do not appear to exist, including a professor at the University of Colorado and a food anthropologist at Cornell.

Although the guide carried the Sun-Times logo, the newspaper claims it wasn’t written or approved by its editorial team. It stated that the section had been licensed from a national content partner, reportedly Hearst, and is now being removed from digital editions.

Victor Lim, senior director of audience development, said the paper is investigating how the content was published and is working to update policies to ensure third-party material aligns with newsroom standards.

Several stories in the guide lack bylines or carry the names of writers linked to other questionable content. Marco Buscaglia, credited for one piece, admitted to using AI ‘for background’ and failing to verify the sources, calling the oversight ‘completely embarrassing.’

The incident echoes similar controversies at other media outlets where AI-generated material has been presented alongside legitimate reporting. Even when such content originates from third-party providers, the blurred line between verified journalism and fabricated stories continues to erode reader trust.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Half of young people would prefer life without the internet

Nearly half of UK youths aged 16 to 21 say they would prefer to grow up without the internet, a new survey reveals. The British Standards Institution found that 68% feel worse after using social media and that half would support a digital curfew after 10 p.m.

These findings come as the government considers app usage limits for platforms like TikTok and Instagram. The study also showed that many UK young people feel compelled to hide their online behaviour: 42% admitted lying to their parents, and a similar number have fake or burner accounts.

More worryingly, 27% said they have shared their location with strangers, while others admitted pretending to be someone else entirely. Experts argue that digital curfews alone won’t reduce exposure to online harms without broader safeguards in place.

Campaigners and charities are calling for urgent legislation that puts children’s safety before tech profits. The Molly Rose Foundation stressed the danger of algorithms pushing harmful content, while the NSPCC urged a shift towards less addictive and safer online spaces.

The majority of young people surveyed want more protection online and clearer action from tech firms and policymakers.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Can AI replace therapists?

With mental health waitlists at record highs and many struggling to access affordable therapy, some are turning to AI chatbots for support.

Kelly, who waited months for NHS therapy, found solace in Character.ai bots, describing them as always available, judgment-free companions. ‘It was like a cheerleader,’ she says, noting how the bots helped her cope with anxiety and heartbreak.

But despite emotional benefits for some, AI chatbots are not without serious risks. Character.ai is facing a lawsuit from the mother of a 14-year-old who died by suicide after reportedly forming a harmful relationship with an AI character.

Other bots, like one from the National Eating Disorder Association, were shut down after giving dangerous advice.

Even so, demand is high. In April 2024 alone, 426,000 mental health referrals were made in England, and over a million people are still waiting for care. Apps like Wysa, used by 30 NHS services, aim to fill the gap by offering crisis support and self-help tools based on cognitive behavioural therapy (CBT).

Experts warn, however, that chatbots lack context, emotional intuition, and safeguarding. Professor Hamed Haddadi calls them ‘inexperienced therapists’ that may agree too easily or misunderstand users.

Ethicists like Dr Paula Boddington point to bias and cultural gaps in the AI training data. And privacy is a looming concern: ‘You’re not entirely sure how your data is being used,’ says psychologist Ian MacRae.

Still, users like Nicholas, who lives with autism and depression, say AI has helped when no one else was available. ‘It was so empathetic,’ he recalls, describing how Wysa comforted him during a night of crisis.

A Dartmouth study found that AI chatbot users saw a 51% drop in depressive symptoms, but even its authors stress that bots cannot replace human therapists. Most experts agree AI tools may serve as temporary relief or early intervention, but not as long-term substitutes.

As John, another user, puts it: ‘It’s a stopgap. When nothing else is there, you clutch at straws.’

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI Darth Vader in Fortnite sparks union dispute

The use of an AI-generated Darth Vader voice in Fortnite has triggered a legal dispute between SAG-AFTRA and Epic Games.

According to GamesIndustry.biz, the actors’ union filed an unfair labor practice complaint, claiming it was not informed or consulted about the decision to use an artificial voice model in the game.

In Fortnite’s Galactic Battle season, players who defeat Darth Vader in Battle Royale can recruit him, triggering limited voice interactions powered by conversational AI.

The voice stems from a licensing agreement with the estate of the late James Earl Jones, who retired from the role in 2022 and granted rights for AI use of his iconic performance.

While Epic Games has confirmed it had legal permission to use Jones’ voice, SAG-AFTRA alleges the company bypassed union protocols by not informing them or offering the role to a human actor.

The outcome of this dispute could have broader implications for how AI voices are integrated into video games and media, particularly regarding labour rights and union oversight.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Lords reject UK AI copyright bill again

The UK government has suffered a second defeat in the House of Lords over its Data (Use and Access) Bill, as peers once again backed a copyright-focused amendment aimed at protecting artists from AI content scraping.

Baroness Kidron, a filmmaker and digital rights advocate, led the charge, accusing ministers of listening to the ‘sweet whisperings of Silicon Valley’ and allowing tech firms to ‘redefine theft’ by exploiting copyrighted material without permission.

Her amendment would force AI companies to disclose their training data sources and obtain consent from rights holders.

The government had previously rejected this amendment, arguing it would lead to ‘piecemeal’ legislation and pre-empt ongoing consultations.

But Kidron’s position was strongly supported across party lines, with peers calling the current AI practices ‘burglary’ and warning of catastrophic damage to the UK’s creative sector.

High-profile artists like Sir Elton John, Paul McCartney, Annie Lennox, and Kate Bush have condemned the government’s stance, with Sir Elton branding ministers ‘losers’ and accusing them of enabling theft.

Peers from Labour, the Lib Dems, the Conservatives, and the crossbenches united to defend UK copyright law, calling the government’s actions a betrayal of the country’s leadership in intellectual property rights.

Labour’s Lord Brennan warned against a ‘double standard’ for AI firms, while Lord Berkeley insisted immediate action was needed to prevent long-term harm.

Technology Minister Baroness Jones countered that no country has resolved the AI-copyright dilemma and warned that the amendment would only create more regulatory confusion.

Nonetheless, peers voted overwhelmingly in favour of Kidron’s proposal—287 to 118—sending the bill back to the Commons with a strengthened demand for transparency and copyright safeguards.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Researchers believe AI transparency is within reach by 2027

Top AI researchers admit they still do not fully understand how generative AI models work. Unlike traditional software, which follows predefined logic, generative models learn to produce their responses independently, creating a challenge for developers trying to interpret their decision-making processes.

Dario Amodei, co-founder of Anthropic, described this lack of understanding as unprecedented in tech history. Mechanistic interpretability, a growing academic field, aims to reverse-engineer how generative models arrive at their outputs.

Experts compare the challenge to understanding the human brain, but note that, unlike biology, every digital ‘neuron’ in AI is visible.

Companies like Goodfire are developing tools to map AI reasoning steps and correct errors, helping prevent harmful use or deception. Boston University professor Mark Crovella says interest is surging due to the practical and intellectual appeal of interpreting AI’s inner logic.

Researchers believe the ability to reliably detect biases or intentions within AI models could be achieved within a few years.

This transparency could open the door to AI applications in critical fields like security, and give firms a major competitive edge. Understanding how these systems work is increasingly seen as vital for global tech leadership and public safety.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Elton John threatens legal fight over AI use

Sir Elton John has lashed out at the UK government over plans that could allow AI companies to use copyrighted content without paying artists, calling ministers ‘absolute losers’ and accusing them of ‘thievery on a high scale.’

He warned that younger musicians, without the means to challenge tech giants, would be most at risk if the proposed changes go ahead.

The row centres on a House of Lords amendment to the Data Bill that would require AI firms to disclose what material they use.

Despite a strong majority in favour in the Lords, the Commons blocked the move, meaning the bill will keep bouncing between the two chambers until a compromise is reached.

Sir Elton, joined by playwright James Graham, said the government was failing to defend creators and seemed more interested in appeasing powerful tech firms.

More than 400 artists, including Sir Paul McCartney, have signed a letter urging Prime Minister Sir Keir Starmer to strengthen copyright protections instead of allowing AI to mine their work unchecked.

While the government insists no changes will be made unless they benefit creators, critics say the current approach risks sacrificing the UK’s music industry for Silicon Valley’s gain.

Sir Elton has threatened legal action if the plans go ahead, saying, ‘We’ll fight it all the way.’

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US bans non-consensual explicit deepfakes nationwide

The US is introducing a landmark federal law aimed at curbing the spread of non-consensual explicit deepfake images, following mounting public outrage.

President Donald Trump is expected to sign the Take It Down Act, which will criminalise the sharing of explicit images, whether real or AI-generated, without consent. The law will also require tech platforms to remove such content within 48 hours of notification, instead of leaving the matter to patchy state laws.

The legislation is one of the first at the federal level to directly tackle the misuse of AI-generated content. It builds on earlier laws that protected children but had left adults vulnerable due to inconsistent state regulations.

The bill received rare bipartisan support in Congress and was backed by over 100 organisations, including tech giants like Meta, TikTok and Google. First Lady Melania Trump also supported the act, hosting a teenage victim of deepfake harassment during the president’s address to Congress.

The act was prompted in part by incidents like that of Elliston Berry, a Texas high school student targeted by a classmate who used AI to alter her social media image into a nude photo. Similar cases involving teen girls across the country highlighted the urgency for action.

Tech companies had already started offering tools to remove explicit images, but the lack of consistent enforcement allowed harmful content to persist on less cooperative platforms.

Supporters of the law argue it sends a strong societal message rather than allowing the exploitation to continue unchallenged.

Advocates like Imran Ahmed and Ilana Beller emphasised that while no law is a perfect solution, this one forces platforms to take real responsibility and offers victims some much-needed protection and peace of mind.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Deepfake voice scams target US officials in phishing surge

Hackers are using deepfake voice and video technology to impersonate senior US government officials and high-profile tech figures in sophisticated phishing campaigns designed to steal sensitive data, the FBI has warned.

Since April, cybercriminals have been contacting current and former federal and state officials through fake voice messages and text messages claiming to be from trusted sources.

The scammers attempt to establish rapport and then direct victims to malicious websites to extract passwords and other private information.

The FBI cautions that if hackers compromise one official’s account, they may use that access to impersonate them further and target others in their network.

The agency urges individuals to verify identities, avoid unsolicited links, and enable multifactor authentication to protect sensitive accounts.

Separately, Polygon co-founder Sandeep Nailwal reported a deepfake scam in which bad actors impersonated him and colleagues via Zoom, urging crypto users to install malicious scripts. He described the attack as ‘horrifying’ and noted the difficulty of reporting such incidents to platforms like Telegram.

The FBI and cybersecurity experts recommend examining media for visual inconsistencies, avoiding software downloads during unverified calls, and never sharing credentials or wallet access unless certain of the source’s legitimacy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!