Cyber attack hits Lee Enterprises staff data

Thousands of current and former employees at Lee Enterprises have had their data exposed following a cyberattack earlier this year.

Hackers gained access to the company’s systems in early February, compromising sensitive information such as names and Social Security numbers before the breach was contained the same day.

Although the media firm, which operates over 70 newspapers across 26 US states, swiftly secured its networks, a three-month investigation involving external cybersecurity experts revealed that attackers accessed databases containing employee details.

The breach potentially affects around 40,000 individuals — far more than the company’s 4,500 current staff — indicating that past employees were also impacted.

The stolen data could be used for identity theft, fraud or phishing attempts. Criminals may even impersonate affected employees to infiltrate deeper into company systems and extract more valuable information.

Lee Enterprises has notified those impacted and filed relevant disclosures with authorities, including the Maine Attorney General’s Office.

Headquartered in Iowa, Lee Enterprises draws over 200 million monthly online page views and generated over $611 million in revenue in 2024. The incident underscores the ongoing vulnerability of media organisations to cyber threats, especially when personal employee data is involved.


Eminem sues Meta over copyright violations

Eminem has filed a major lawsuit against Meta, accusing the tech giant of knowingly enabling widespread copyright infringement across its platforms. The rapper’s publishing company, Eight Mile Style, is seeking £80.6 million in damages, claiming 243 of his songs were used without authorisation.

The lawsuit argues that Meta, which owns Facebook, Instagram and WhatsApp, allowed tools such as Original Audio and Reels to encourage unauthorised reproduction and use of Eminem’s music.

The filing claims this use occurred without proper licensing or attribution, significantly diminishing the value of his copyrights.

Eminem’s legal team contends that Meta profited from the infringement instead of ensuring his works were protected. If a settlement cannot be reached, the artist is demanding the maximum statutory damages of $150,000 per song on each of Meta’s three platforms, which would amount to over $109 million.

Meta has faced similar lawsuits before, including a high-profile case in 2022 brought by Epidemic Sound, which alleged the unauthorised use of thousands of its tracks. The latest claim adds to growing pressure on social media platforms to address copyright violations more effectively.


OpenAI turns ChatGPT into AI gateway

OpenAI plans to reinvent ChatGPT as an all-in-one ‘super assistant’ that knows its users and becomes their primary gateway to the internet.

Details emerged from a partly redacted internal strategy document shared during the US government’s antitrust case against Google.

Rather than limiting ChatGPT to existing apps and websites, OpenAI envisions a future where the assistant supports everyday life—from suggesting recipes at home to taking notes at work or guiding users while travelling.

The company says the AI should evolve into a reliable, emotionally intelligent helper capable of handling a wide range of personal and professional tasks.

OpenAI also believes hardware will be key to this transformation. It recently acquired io, a start-up founded by former Apple designer Jony Ive, for $6.4 billion to develop AI-powered devices.

The company’s strategy outlines how upcoming models like o2 and o3, alongside tools like multimodality and generative user interfaces, could make ChatGPT capable of taking meaningful action instead of simply offering responses.

The document also reveals OpenAI’s intention to back a regulation requiring tech platforms to allow users to set ChatGPT as their default assistant. Confident in its fast growth, research lead, and independence from ads, the company aims to maintain its advantage through bold decisions, speed, and self-disruption.


NSO asks court to overturn WhatsApp verdict

Israeli spyware company NSO Group has requested a new trial after a US jury ordered it to pay $168 million in damages to WhatsApp.

The company, which has faced mounting legal and financial troubles, filed a motion in a California federal court last week seeking to reduce the verdict or secure a retrial.

The May verdict awarded WhatsApp $444,719 in compensatory damages and $167.25 million in punitive damages. Jurors found that NSO exploited vulnerabilities in the encrypted platform and sold the exploit to clients who allegedly used it to target journalists, activists and political rivals.

WhatsApp, owned by Meta, filed the lawsuit in 2019.

NSO claims the punitive award is unconstitutional, arguing it is over 376 times greater than the compensatory damages and far exceeds the US Supreme Court’s general guidance of a 4:1 ratio.

The firm also said it cannot afford the penalty, citing losses of $9 million in 2023 and $12 million in 2024. Its CEO testified that the company is ‘struggling to keep our heads above water’.

WhatsApp, responding to TechCrunch in a statement, said NSO was once again trying to evade accountability. The company vowed to continue its legal campaign, including efforts to secure a permanent injunction that would prevent NSO from ever targeting WhatsApp or its users again.


Courts consider limits on AI evidence

A rule newly proposed by the Judicial Conference of the United States could reshape how AI-generated evidence is treated in court. Dubbed Rule 707, it would allow machine-generated evidence to be admitted only if it meets the same reliability standards required of expert testimony under Rule 702.

However, it would not apply to outputs from simple scientific instruments or widely used commercial software. The rule aims to address concerns about the reliability and transparency of AI-driven analysis, especially when used without a supporting expert witness.

Critics argue that limiting the rule to machine output presented without an expert renders it overly narrow, as the underlying risks of bias and poor interpretability persist regardless of whether an expert is involved. They suggest that all machine-generated evidence in US courts should be subject to robust scrutiny.

The Advisory Committee is also weighing how to scope terminology such as ‘machine learning’ so that Rule 707 does not encompass more than intended. Meanwhile, a separate proposed rule on deepfakes has been shelved because courts already have tools to address such forgeries.


Meta faces backlash over open source AI claims

Meta is under renewed scrutiny for what critics describe as ‘open washing’ after sponsoring a Linux Foundation whitepaper on the benefits of open source AI.

The paper highlights how open models help reduce enterprise costs—claiming companies using proprietary AI tools spend over three times more. However, Meta’s involvement has raised questions, as its Llama AI models are presented as open source despite industry experts insisting otherwise.

Amanda Brock, head of OpenUK, argues that Llama does not meet accepted definitions of open source due to licensing terms that restrict commercial use.

She referenced the Open Source Initiative’s (OSI) standards, which Llama fails to meet, pointing to the presence of commercial limitations that contradict open source principles. Brock noted that open source should allow unrestricted use, which Llama’s license does not support.

Meta has long branded its Llama models as open source, but the OSI and other stakeholders have repeatedly pushed back, stating that the company’s licensing undermines the very foundation of open access.

While Brock acknowledged Meta’s contribution to the broader open source conversation, she also warned that such mislabelling could have serious consequences—especially as lawmakers and regulators increasingly reference open source in crafting AI legislation.

Other firms have faced similar allegations, including Databricks with its DBRX model in 2024, which was also criticised for failing to meet OSI standards. As the AI sector continues to evolve, the line between truly open and merely accessible models remains a point of growing tension.


EU says US tech firms censor more

Far more online content is removed under US tech firms’ terms and conditions than under the EU’s Digital Services Act (DSA), according to Tech Commissioner Henna Virkkunen.

Her comments respond to criticism from American tech leaders, including Elon Musk, who have labelled the DSA a threat to free speech.

In an interview with Euractiv, Virkkunen said recent data show that 99% of content removals in the EU between September 2023 and April 2024 were carried out by platforms like Meta and X based on their own rules, not due to EU regulation.

Only 1% of cases involved ‘trusted flaggers’ — vetted organisations that report illegal content to national authorities. Just 0.001% of those reports led to an actual takedown decision by authorities, she added.

The DSA’s transparency rules made those figures available. ‘Often in the US, platforms have more strict rules with content,’ Virkkunen noted.

She gave examples such as discussions about euthanasia and nude artworks, which are often removed under US platform policies but remain online under European guidelines.

Virkkunen recently met with US tech CEOs and lawmakers, including Republican Congressman Jim Jordan, a prominent critic of the DSA and the DMA.

She said the data helped clarify how EU rules actually work. ‘It is important always to underline that the DSA only applies in the European territory,’ she said.

While pushing back against American criticism, Virkkunen avoided direct attacks on individuals like Elon Musk or Mark Zuckerberg. She suggested platform resistance reflects business models and service design choices.

Asked about delays in final decisions under the DSA — including open cases against Meta and X — Virkkunen stressed the need for a strong legal basis before enforcement.


AI Mode reshapes Google’s search results

One year after launching AI-generated search results via AI Overviews, Google has unveiled AI Mode—a new feature it claims will redefine online search.

Functioning as an integrated chatbot, AI Mode allows users to ask complex questions, receive detailed responses, and continue with follow-up queries, eliminating the need to click through traditional links.

Google’s CEO Sundar Pichai described it as a ‘total reimagining of search,’ noting significant changes in user behaviour during early trials.

Analysts suggest the company is attempting to disrupt its own search business before rivals do, following internal concerns sparked by the rise of tools like ChatGPT.

With AI Mode, Google is increasingly shifting from directing users to websites toward delivering instant answers itself. Critics fear it could dramatically reduce web traffic for publishers who depend on Google for visibility and revenue.

While Google insists the open web will continue to grow, many publishers remain unconvinced. The News/Media Alliance condemned the move, calling it theft of content without fair return.

Links were the last mechanism providing meaningful traffic, said CEO Danielle Coffey, who urged the US Department of Justice to take action against what she described as monopolistic behaviour.

Meanwhile, Google is rapidly integrating AI across its ecosystem. Alongside AI Mode, it introduced developments in its Gemini model, with the aim of building a ‘world model’ capable of simulating and planning like the human brain.

Google DeepMind’s Demis Hassabis said the goal is to lay the foundations for an AI-native operating system.


EMSA given broader powers for digital maritime threats

The European Maritime Safety Agency (EMSA) is set to take on an expanded role in maritime security, following a provisional agreement between the European Parliament and the Council.

Instead of focusing solely on traditional safety tasks, EMSA will now help tackle modern challenges, including cyber attacks and hybrid threats that increasingly target critical maritime infrastructure across Europe.

The updated mandate enables EMSA to support EU member states and the European Commission with technical, operational and scientific assistance in areas such as cybersecurity, pollution response, maritime surveillance and decarbonisation.

Rather than remaining confined to its original scope, the agency may also adopt new responsibilities as risks evolve, provided such tasks are requested by the Commission or individual countries.

The move forms part of a broader EU legislative package aimed at reinforcing maritime safety rules, improving environmental protections and updating inspection procedures.

The reforms ensure EMSA is equipped with adequate human and financial resources to handle its wider remit and contribute to strategic resilience in an increasingly digital and geopolitically unstable world.

Created in 2002 and based in Lisbon, EMSA plays a central role in safeguarding maritime transport, which remains vital for Europe’s economy and trade.

With more than 2,000 marine incidents reported annually, the agency’s modernised mandate is expected to strengthen the EU’s ability to prevent disruptions at sea and support its broader green and security goals.


The rise of AI in Hollywood, gaming, and music

It feels like just yesterday that the internet was buzzing over the first renditions of OpenAI’s DALL·E tool, with millions competing to craft the funniest, weirdest prompts and sharing the results across social media. The sentiment was clear: the public was fascinated by the creative potential of this new technology.

But beneath the laughter and viral memes was a quieter, more uneasy question: what happens when AI not only generates quirky artwork, but begins to reshape our daily lives—both online and off? As it turns out, that process was already underway behind the scenes—and we were none the wiser.

AI in action: How the entertainment industry is using it today

Three years later, we have reached a point where AI’s influence seems to have passed the point of no return. The entertainment industry was among the first to embrace the technology, and with the 2025 Academy Awards, the Academy confirmed that films incorporating AI remain eligible for Oscar nominations.

That decision has been met with mixed reactions, to put it mildly. While some have praised the industry’s eagerness to explore new technological frontiers, others argue that AI greatly diminishes the human contribution to the art of filmmaking and thereby strips away the essence of the seventh art.

The first wave of AI-enhanced storytelling

One recent example is the film The Brutalist, in which AI was used to refine Adrien Brody’s Hungarian dialogue to sound more authentic—a move that sparked both technical admiration and creative scepticism.

With AI now embedded in everything from voiceovers to entire digital actors, we are only beginning to confront what it truly means when creativity is no longer exclusively human.

Adrien Brody’s Hungarian dialogue in ‘The Brutalist’ was refined with generative AI to sound more authentic. (Screenshot: YouTube / Oscars)

Setting the stage: AI in the spotlight

The first major big-screen resurrection occurred in 1994’s The Crow, where Brandon Lee’s sudden passing mid-production forced the studio to rely on body doubles, digital effects, and existing footage to complete his scenes. However, it was not until 2016 that audiences witnessed the first fully digital revival.

In Rogue One: A Star Wars Story, Peter Cushing’s character, Grand Moff Tarkin, was brought back to life using a combination of CGI, motion capture, and a facial stand-in. Although primarily reliant on traditional VFX, the project paved the way for future use of deepfakes and AI-assisted performance recreation across movies, TV shows, and video games.

Afterward, some speculated that studios tied to Peter Cushing’s legacy—such as Tyburn Film Productions—could pursue legal action against Disney for reviving his likeness without direct approval. While no lawsuit was filed, questions were raised about who owns a performer’s digital identity after death.

The digital Jedi: How AI helped recreate Luke Skywalker

As fate would have it, AI’s grand debut took place in a galaxy far, far away, with the surprise appearance of Luke Skywalker in the Season 2 finale of The Mandalorian (spoiler alert). The moment thrilled fans and marked a turning point for the franchise, but it was more than just fan service.

Here’s the twist: Mark Hamill did not record any new voice lines. Instead, actor Max Lloyd-Jones performed the physical role, while a younger version of Hamill’s voice was recreated with the help of Respeecher, a Ukrainian company specialising in AI-driven speech synthesis.

Impressed by their work, Disney turned to Respeecher once again—this time to recreate James Earl Jones’s iconic Darth Vader voice for the Obi-Wan Kenobi miniseries. Using archival recordings that Jones signed over for AI use, the system synthesised new dialogue that perfectly matched the intonation and timbre of his original trilogy performances.

Darth Vader’s voice in ‘Obi-Wan Kenobi’ was recreated by Respeecher from James Earl Jones’s archival recordings. (Screenshot: YouTube / Star Wars)

AI in moviemaking: Preserving legacy or crossing a line?

The use of AI to preserve and extend the voices of legendary actors has been met with a mix of admiration and unease. While many have praised the seamless execution and respect shown toward the legacy of both Hamill and Jones, others have raised concerns about consent, creative authenticity, and the long-term implications of allowing AI to perform in place of humans.

In both cases, the actors were directly involved or gave explicit approval, but these high-profile examples may be setting a precedent for a future where that level of control is not guaranteed.

A notable case that drew backlash was the planned use of a fully CGI-generated James Dean in the unreleased film Finding Jack, decades after his death. Critics and fellow actors have voiced strong opposition, arguing that bringing back a performer without their consent reduces them to a brand or asset, rather than honouring them as an artist.

AI in Hollywood: Actors made redundant?

What further heightened concerns among working actors was the launch of Promise, a new Hollywood studio built entirely around generative AI. Backed by wealthy investors, Promise is betting big on Muse—a GenAI tool designed to produce high-quality films and TV series at a fraction of the cost and time required for traditional Hollywood productions.

Filmmaking is a business, after all—and with production budgets ballooning year after year, AI-powered entertainment sounds like a dream come true for profit-driven studios.

Meta’s recent collaboration with Blumhouse Productions on Movie Gen only adds fuel to the fire, signalling that major players are eager to explore a future where storytelling may be driven as much by algorithms as by authentic artistry.

AI in gaming: Automation or artistic collapse?

Speaking of entertainment businesses, we cannot ignore the world’s most popular entertainment medium: gaming. While the pandemic triggered a massive boom in game development and player engagement, the momentum was short-lived.

As profits began to slump in the years that followed, the industry was hit by a wave of layoffs, prompting widespread internal restructuring and forcing publishers to rethink their business models entirely. Hoping to cut costs, AAA companies set their sights on AI as a potential saving grace.

Nvidia’s development of AI chips, along with Ubisoft’s and EA’s investments in AI and machine learning, has sent a clear signal to the industry: automation is no longer just a backend tool; it is a front-facing strategy.

With AI-assisted NPC behaviour and AI voice acting, game development is shifting toward faster, cheaper, and potentially less human-driven production. In response, game developers have grown concerned about their future in the industry, and actors are less inclined to sign away their rights for future projects.

AI voice acting in video games

In an attempt to compete with wealthier studios, even indie developers have turned to GenAI to replicate the voices of celebrity voice actors. Tools like ElevenLabs and Altered Studio offer a seemingly straightforward way to get high-quality talent—but if only it were that simple.
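
For a sense of how low the technical barrier has become, here is a minimal sketch of a voice-generation call, assuming ElevenLabs’ documented text-to-speech REST endpoint; the voice ID, model name and sample line are placeholders rather than details from any project mentioned here.

```python
# Minimal sketch: synthesising a voice line via the ElevenLabs
# text-to-speech REST API. Assumes a valid API key and a voice ID
# you are licensed to use; the values below are placeholders.
import os

import requests

API_KEY = os.environ["ELEVENLABS_API_KEY"]  # never hard-code credentials
VOICE_ID = "your-voice-id"                  # placeholder voice ID

url = f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}"
headers = {"xi-api-key": API_KEY, "Content-Type": "application/json"}
payload = {
    "text": "Welcome back, traveller. The road ahead is dangerous.",
    "model_id": "eleven_multilingual_v2",   # model name as documented at the time of writing
}

response = requests.post(url, headers=headers, json=payload, timeout=60)
response.raise_for_status()

# The endpoint returns raw audio bytes (MPEG by default).
with open("npc_line.mp3", "wb") as f:
    f.write(response.content)
```

The very ease of a call like this is what makes the questions that follow so pressing: nothing in the request itself verifies that the underlying voice model was trained with the speaker’s consent.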

Copyright laws and concerns over authenticity remain two of the strongest barriers to the widespread adoption of AI-generated voices—especially as many consumers still view the technology as a crutch rather than a creative tool for game developers.

The legal landscape around AI-generated voices remains murky. In many places, the rights to a person’s voice—or its synthetic clone—are poorly defined, creating loopholes developers can exploit.

AI voice cloning challenges legal boundaries in gaming

The legal ambiguity has fuelled a backlash from voice actors, who argue that their performances are being mimicked without consent or pay. SAG-AFTRA and others began pushing for tighter legal protections in 2023.

A notable flashpoint came in 2025, when Epic Games faced criticism for using an AI-generated Darth Vader voice in Fortnite. SAG-AFTRA filed a formal complaint, citing licensing concerns and a lack of actor involvement.

Not all uses have been controversial. CD Projekt Red recreated the voice of the late Miłogost Reczek in Cyberpunk 2077: Phantom Liberty—with his family’s blessing—setting a respectful precedent for the ethical use of AI.

How AI is changing music production and artist identity

AI is rapidly reshaping music production, with a recent survey showing that nearly 25% of producers are already integrating AI tools into their creative workflows. This shift reflects a growing trend in how technology is influencing composition, mixing, and even vocal performance.

Artists like Imogen Heap are embracing the change with projects like Mogen, an AI version of herself that can create music and interact with fans—blurring the line between human creativity and digital innovation.

Major labels are also experimenting: Universal Music recently used AI to reimagine Brenda Lee’s 1958 classic Rockin’ Around the Christmas Tree in Spanish, preserving the spirit of the original while expanding its cultural reach.

AI and the future of entertainment

As AI becomes more embedded in entertainment, the line between innovation and exploitation grows thinner. What once felt like science fiction is now reshaping the way stories are told—and who gets to tell them.

Whether AI becomes a tool for creative expansion or a threat to human artistry will depend on how the industry and audiences choose to engage with it in the years ahead. As in any business, consumers vote with their wallets, and only time will tell whether AI and authenticity can truly go hand-in-hand.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!