Chicago Sun-Times under fire for fake summer guide

The Chicago Sun-Times has come under scrutiny after its 18 May issue featured a summer guide riddled with fake books, quotes, and experts, many of which appear to have been generated by AI.

Among genuine titles like Call Me By Your Name, readers encountered fictional works wrongly attributed to real authors, such as Min Jin Lee and Rebecca Makkai. The guide also cited individuals who do not appear to exist, including a professor at the University of Colorado and a food anthropologist at Cornell.

Although the guide carried the Sun-Times logo, the newspaper says it was neither written nor approved by its editorial team. The paper stated that the section had been licensed from a national content partner, reportedly Hearst, and that it is being removed from digital editions.

Victor Lim, the senior director of audience development, said the paper is investigating how the content was published and is working to update policies to ensure third-party material aligns with newsroom standards.

Several stories in the guide lack bylines or carry names linked to questionable content. Marco Buscaglia, credited for one piece, admitted to using AI ‘for background’ but said he had failed to verify the sources this time, calling the oversight ‘completely embarrassing.’

The incident echoes similar controversies at other media outlets where AI-generated material has been presented alongside legitimate reporting. Even when such content originates from third-party providers, the blurred line between verified journalism and fabricated stories continues to erode reader trust.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google unveils Veo 3 with audio capabilities

Google has introduced Veo 3, its most advanced video-generating AI model to date, capable of producing sound effects, ambient noise and dialogue to accompany the footage it creates.

Announced at the Google I/O 2025 developer conference, Veo 3 is available through the Gemini chatbot for those subscribed to the $249.99-per-month AI Ultra plan. The model accepts both text and image prompts, allowing users to generate audiovisual scenes rather than silent clips.

Unlike other AI tools, Veo 3 can analyse raw video pixels to synchronise audio automatically, offering a notable edge in an increasingly crowded field of video-generation platforms. While sound-generating AI isn’t new, Google claims Veo 3’s ability to match audio precisely with visual content sets it apart.

The progress builds on DeepMind’s earlier work in ‘video-to-audio’ AI and may rely on training data from YouTube, though Google hasn’t confirmed this.

To help prevent misuse, such as the creation of deepfakes, Google says Veo 3 includes SynthID, its proprietary watermarking technology that embeds invisible markers in every generated frame. Despite these safeguards, concerns remain within the creative industry.

Artists fear tools like Veo 3 could replace thousands of jobs, with a recent study predicting over 100,000 roles in film and animation could be affected by AI before 2026.

Alongside Veo 3, Google has also updated Veo 2. The earlier model now allows users to edit videos more precisely, adding or removing elements and adjusting camera movements. These features are expected to become available soon on Google’s Vertex AI API platform.

Microsoft brings Grok AI to Azure

Microsoft has become one of the first major cloud providers to offer managed access to Grok, the controversial AI model from Elon Musk’s xAI startup.

Now available through the Azure AI Foundry platform, both Grok 3 and Grok 3 mini will be billed by Microsoft and include the same service-level agreements as other Azure-hosted models.

Grok gained attention for its unfiltered and provocative tone, marketed by Musk as a more candid alternative to mainstream AI.

Unlike ChatGPT, it has been known to use vulgar language and provide responses on sensitive topics that other models typically avoid.

However, the AI has stirred criticism, particularly over troubling behaviour such as undressing women in photos and referencing conspiracy theories. Incidents of censorship and offensive content have raised concerns about its deployment on Musk’s platform X.

Rather than replicating that experience, Microsoft is offering more controlled versions of Grok within Azure. These versions include stricter content controls, enhanced data integration, and improved governance tools, distinguishing them from the models available directly through xAI.

Can AI replace therapists?

With mental health waitlists at record highs and many struggling to access affordable therapy, some are turning to AI chatbots for support.

Kelly, who waited months for NHS therapy, found solace in Character.ai bots, describing them as always available, judgment-free companions. ‘It was like a cheerleader,’ she says, noting how bots helped her cope with anxiety and heartbreak.

But despite emotional benefits for some, AI chatbots are not without serious risks. Character.ai is facing a lawsuit from the mother of a 14-year-old who died by suicide after reportedly forming a harmful relationship with an AI character.

Other bots, such as one from the National Eating Disorders Association, were shut down after giving dangerous advice.

Even so, demand is high. In April 2024 alone, 426,000 mental health referrals were made in England, and over a million people are still waiting for care. Apps like Wysa, used by 30 NHS services, aim to fill the gap by offering CBT-based self-help tools and crisis support.

Experts warn, however, that chatbots lack context, emotional intuition, and safeguarding. Professor Hamed Haddadi calls them ‘inexperienced therapists’ that may agree too easily or misunderstand users.

Ethicists like Dr Paula Boddington point to bias and cultural gaps in the AI training data. And privacy is a looming concern: ‘You’re not entirely sure how your data is being used,’ says psychologist Ian MacRae.

Still, users like Nicholas, who lives with autism and depression, say AI has helped when no one else was available. ‘It was so empathetic,’ he recalls, describing how Wysa comforted him during a night of crisis.

A Dartmouth study found AI users saw a 51% drop in depressive symptoms, but even its authors stress bots can’t replace human therapists. Most experts agree AI tools may serve as temporary relief or early intervention—but not as long-term substitutes.

As John, another user, puts it: ‘It’s a stopgap. When nothing else is there, you clutch at straws.’

Google releases NotebookLM app early

Google has launched its AI-powered research assistant, NotebookLM, on Android and iOS a day earlier than expected and just ahead of its annual I/O developer conference.

Until now, the service was only available on desktop, but users can now access its full features while on the move.

NotebookLM helps users understand complex content by offering intelligent summaries and allowing them to ask questions directly about their documents.

A standout feature, Audio Overviews, creates AI-generated podcast-style summaries from uploaded materials and supports offline listening and background playback.

Mobile users can now create and manage notebooks directly from their devices. Rather than restricting content sources, the app lets users add websites, PDFs, or YouTube videos simply by tapping the share icon and selecting NotebookLM.

It also offers easy access to previously added sources and adapts its appearance to match the device’s light or dark mode settings.

With the release timed just before Google’s keynote, it’s likely the company will highlight NotebookLM’s capabilities further during the I/O 2025 presentation.

Lords reject UK AI copyright bill again

The UK government has suffered a second defeat in the House of Lords over its Data (Use and Access) Bill, as peers once again backed a copyright-focused amendment aimed at protecting artists from AI content scraping.

Baroness Kidron, a filmmaker and digital rights advocate, led the charge, accusing ministers of listening to the ‘sweet whisperings of Silicon Valley’ and allowing tech firms to ‘redefine theft’ by exploiting copyrighted material without permission.

Her amendment would force AI companies to disclose their training data sources and obtain consent from rights holders.

The government had previously rejected this amendment, arguing it would lead to ‘piecemeal’ legislation and pre-empt ongoing consultations.

But Kidron’s position drew strong support across party lines, with peers calling current AI practices ‘burglary’ and warning of catastrophic damage to the UK’s creative sector.

High-profile artists like Sir Elton John, Paul McCartney, Annie Lennox, and Kate Bush have condemned the government’s stance, with Sir Elton branding ministers ‘losers’ and accusing them of enabling theft.

Peers from Labour, the Lib Dems, the Conservatives, and the crossbenches united to defend UK copyright law, calling the government’s actions a betrayal of the country’s leadership in intellectual property rights.

Labour’s Lord Brennan warned against a ‘double standard’ for AI firms, while Lord Berkeley insisted immediate action was needed to prevent long-term harm.

Technology Minister Baroness Jones countered that no country has resolved the AI-copyright dilemma and warned that the amendment would only create more regulatory confusion.

Nonetheless, peers voted overwhelmingly in favour of Kidron’s proposal—287 to 118—sending the bill back to the Commons with a strengthened demand for transparency and copyright safeguards.

Elton John threatens legal fight over AI use

Sir Elton John has lashed out at the UK government over plans that could allow AI companies to use copyrighted content without paying artists, calling ministers ‘absolute losers’ and accusing them of ‘thievery on a high scale.’

He warned that younger musicians, without the means to challenge tech giants, would be most at risk if the proposed changes go ahead.

The row centres on a House of Lords amendment to the Data Bill, rejected by the Commons, which would have required AI firms to disclose what material they use.

Despite a strong majority in favour in the Lords, the Commons blocked the move, meaning the bill will keep bouncing between the two chambers until a compromise is reached.

Sir Elton, joined by playwright James Graham, said the government was failing to defend creators and seemed more interested in appeasing powerful tech firms.

More than 400 artists, including Sir Paul McCartney, have signed a letter urging Prime Minister Sir Keir Starmer to strengthen copyright protections instead of allowing AI to mine their work unchecked.

While the government insists no changes will be made unless they benefit creators, critics say the current approach risks sacrificing the UK’s music industry for Silicon Valley’s gain.

Sir Elton has threatened legal action if the plans go ahead, saying, ‘We’ll fight it all the way.’

US bans non-consensual explicit deepfakes nationwide

The US is introducing a landmark federal law aimed at curbing the spread of non-consensual explicit deepfake images, following mounting public outrage.

President Donald Trump is expected to sign the Take It Down Act, which will criminalise the sharing of explicit images, whether real or AI-generated, without consent. The law will also require tech platforms to remove such content within 48 hours of notification, instead of leaving the matter to patchy state laws.

The legislation is one of the first at the federal level to directly tackle the misuse of AI-generated content. It builds on earlier laws that protected children but had left adults vulnerable due to inconsistent state regulations.

The bill received rare bipartisan support in Congress and was backed by over 100 organisations, including tech giants like Meta, TikTok and Google. First Lady Melania Trump also supported the act, hosting a teenage victim of deepfake harassment during the president’s address to Congress.

The act was prompted in part by incidents like that of Elliston Berry, a Texas high school student targeted by a classmate who used AI to alter her social media image into a nude photo. Similar cases involving teen girls across the country highlighted the urgency for action.

Tech companies had already started offering tools to remove explicit images, but the lack of consistent enforcement allowed harmful content to persist on less cooperative platforms.

Supporters of the law argue it sends a strong societal message instead of allowing the exploitation to continue unchallenged.

Advocates like Imran Ahmed and Ilana Beller emphasised that while no law is a perfect solution, this one forces platforms to take real responsibility and offers victims some much-needed protection and peace of mind.

Grok AI glitch reignites debate on trust and safety in AI tools

Elon Musk’s AI chatbot, Grok, has caused a stir by injecting unsolicited claims about ‘white genocide’ in South Africa into unrelated user queries. These remarks, widely regarded as part of a debunked conspiracy theory, appeared across various innocuous prompts before being quickly removed.

The strange behaviour led to speculation that Grok’s system prompt had been tampered with, possibly by someone inside xAI. Although Grok briefly claimed it had been instructed to mention the topic, xAI has yet to issue a full technical explanation.

Rival AI leaders, including OpenAI’s Sam Altman, joined public criticism on X, calling the episode a concerning sign of possible editorial manipulation. While Grok’s responses returned to normal within hours, the incident reignited concerns about control and transparency in large AI models.

Deepfake voice scams target US officials in phishing surge

Hackers are using deepfake voice and video technology to impersonate senior US government officials and high-profile tech figures in sophisticated phishing campaigns designed to steal sensitive data, the FBI has warned.

Since April, cybercriminals have been contacting current and former federal and state officials through fake voice messages and text messages claiming to be from trusted sources.

The scammers attempt to establish rapport and then direct victims to malicious websites to extract passwords and other private information.

The FBI cautions that if hackers compromise one official’s account, they may use that access to impersonate them further and target others in their network.

The agency urges individuals to verify identities, avoid unsolicited links, and enable multifactor authentication to protect sensitive accounts.

Separately, Polygon co-founder Sandeep Nailwal reported a deepfake scam in which bad actors impersonated him and colleagues via Zoom, urging crypto users to install malicious scripts. He described the attack as ‘horrifying’ and noted the difficulty of reporting such incidents to platforms like Telegram.

The FBI and cybersecurity experts recommend examining media for visual inconsistencies, avoiding software downloads during unverified calls, and never sharing credentials or wallet access unless certain of the source’s legitimacy.
