Google AI Mode raises fears over control of news

Google’s AI Mode has quietly launched in the UK, reshaping how users access news by summarising information directly in search results.

By paraphrasing content gathered across the internet, the tool offers instant answers while reducing the need to visit original news sites.

Critics argue that the technology monopolises access to information in the UK by filtering what users see based on algorithms rather than editorial judgement. Concerns have grown over transparency, fairness and the future of independent journalism.

Publishers are not compensated for content used by AI Mode, and most users rarely click through to the sources. Newsrooms fear pressure to adapt their output to align with Google’s preferences or risk being buried online.

While AI may offer convenience, it lacks accountability. Regulated journalism must operate within legal frameworks, whereas AI faces no such scrutiny, even when errors have real consequences.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Creative industries raise concerns over the EU AI Act

Organisations representing creative sectors have issued a joint statement expressing concerns over the current implementation of the EU AI Act, particularly its provisions for general-purpose AI systems.

The response focuses on recent documents, including the General Purpose AI Code of Practice, accompanying guidelines, and the template for training data disclosure under Article 53.

The signatories, drawn from music and broader creative industries, said they had engaged extensively throughout the consultation process. They now argue that the outcomes do not fully reflect the issues raised during those discussions.

According to the statement, the result does not provide the level of intellectual property protection that some had expected from the regulation.

The group has called on the European Commission to reconsider the implementation package and is encouraging the European Parliament and member states to review the process.

The original EU AI Act was widely acknowledged as a landmark regulation, with technology firms and creative industries closely watching its rollout across member countries.

Elsewhere, Google confirmed that it will sign the General Purpose AI Code of Practice. The company said the latest version supports Europe’s broader innovation goals more effectively than earlier drafts, but it also noted ongoing concerns.

These include the potential impact of specific requirements on competitiveness and on the handling of trade secrets.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US federal appeals court renews scrutiny in child exploitation suit against Musk’s X

A federal appeals court in San Francisco has reinstated critical parts of a lawsuit against Elon Musk’s social media platform X, previously known as Twitter, regarding child exploitation content. 

While recognising that X holds significant legal protections against liability for content posted by users, the 9th Circuit panel determined that the platform must address allegations of negligence stemming from delays in reporting explicit material involving minors to authorities.

The troubling case revolves around two minors who were tricked via Snapchat into providing explicit images, which were later compiled and widely disseminated on Twitter.

Despite being alerted to the content, Twitter reportedly took nine days to remove it and notify the National Center for Missing and Exploited Children, during which the disturbing video received over 167,000 views. 

The court emphasised that once the platform was informed, it had a clear responsibility to act swiftly, separating this obligation from typical protections granted by the Communications Decency Act.

The ruling additionally criticised X for having an infrastructure that allegedly impeded users’ ability to report child exploitation effectively. 

However, the court upheld the dismissal of other claims, including allegations that Twitter knowingly benefited from sex trafficking or deliberately amplified illicit content. 

Advocates for the victims welcomed the decision as a step toward accountability, setting the stage for further legal scrutiny and potential trial proceedings.

Source: Reuters

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK Online Safety Act under fire amid free speech and privacy concerns

The UK’s Online Safety Act, aimed at protecting children and eliminating illegal content online, is stirring a strong debate due to its stringent requirements on social media platforms and websites hosting adult content.

Critics argue that the act’s broad application could unintentionally suppress free speech, as highlighted by social media platform X.

X claims the act results in the censorship of lawful content, reflecting concerns shared by politicians, free-speech campaigners, and content creators.

Moreover, public unease is evident, with over 468,000 individuals signing a petition for the act’s repeal, citing privacy concerns over mandatory age checks requiring personal data on adult content sites.

Despite mounting criticism, the UK government is resolute in its commitment to the legislation. Technology Secretary Peter Kyle equates opposition to siding with online predators, emphasising child protection.

The government asserts that the act also mandates platforms to uphold freedom of expression alongside child safety obligations.

X criticises both the broad scope and the tight compliance timelines of the act, warning of pressure towards over-censorship, and calls for significant statutory revisions to protect personal freedoms while safeguarding children.

The government rebuffs claims that the Online Safety Act compromises free speech, with assurances that the law equally protects freedom of expression.

Meanwhile, Ofcom, the UK’s communications regulator, has opened investigations into the compliance of several companies operating pornography sites, underlining the rigour of enforcement.

Source: Reuters

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Rod Stewart honours Ozzy Osbourne with AI fantasy

At a recent Atlanta concert, Rod Stewart honoured the late Ozzy Osbourne in a strikingly unconventional way, by showing an AI-generated video of Ozzy taking selfies in heaven with late music icons. The tribute played on a giant screen behind Stewart as he performed ‘Forever Young,’ depicting a cartoonish Ozzy grinning alongside legends like Kurt Cobain, Prince, Michael Jackson, and Bob Marley, all united by a floating selfie stick among the clouds.

The video, originally captured by a concertgoer and shared on TikTok, featured Ozzy smiling and posing with other departed stars like Tina Turner and Freddie Mercury, turning heaven into an eternal celebrity photo op. Instead of a traditional photo montage, Stewart’s new approach created a digital afterlife where jam sessions and selfies with rock’s finest never end, implying perhaps that Ozzy has already joined them.

That marks a notable shift from Stewart’s earlier tributes to Osbourne, which relied on simple archival photographs. The AI animation, however strange, seems to reflect a deeper attempt to celebrate Ozzy’s spirit in a uniquely modern way, courtesy, presumably, of a tech-savvy relative.

Following Ozzy’s death on 22 July, Stewart shared a heartfelt farewell on Instagram: ‘Bye, Ozzy. Sleep well, my friend. I’ll see you up there, later rather than sooner.’ Judging by this tribute, he’s already imagining what that reunion might look like.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Microsoft study flags 40 jobs highly vulnerable to AI automation

Microsoft Research released a comprehensive AI impact assessment, ranking 80 occupations by exposure to generative AI tools such as Copilot and ChatGPT. Roles heavily involved in language, writing, client communication, and routine digital tasks showed the highest AI overlap. Notable examples include translators, historians, customer service agents, political scientists, and data scientists.

By contrast, jobs requiring hands-on work, empathy, or real-time physical and emotional engagement, such as nurses, phlebotomists, construction workers, embalmers, and housekeeping staff, were classified as low risk under current AI capabilities. Experts suggest that these kinds of positions remain essential because they involve physical presence, human interaction, and complex real-time decision making.

Although certain professions scored high for AI exposure, Microsoft and independent analysts emphasise that most jobs won’t disappear entirely. Instead, generative AI tools are expected to augment workflows, creating hybrid roles where human judgement and oversight remain critical, especially in sectors such as financial services, healthcare, and creative industries.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI pulls searchable chats from ChatGPT

OpenAI has removed a feature that allowed users to make their ChatGPT conversations publicly searchable, following backlash over accidental exposure of sensitive content.

Dane Stuckey, OpenAI’s CISO, confirmed the rollback on Thursday, describing it as a short-lived experiment meant to help users find helpful conversations. However, he acknowledged that the feature posed privacy risks.

‘Ultimately, we think this feature introduced too many opportunities for folks to accidentally share things they didn’t intend to,’ Stuckey wrote in a post on X. He added that OpenAI is working to remove any indexed content from search engines.

The move came swiftly after Fast Company and privacy advocate Luiza Jarovsky reported that some shared conversations were appearing in Google search results.

Jarovsky posted examples on X, noting that even though the chats were anonymised, users were unknowingly revealing personal experiences, including harassment and mental health struggles.

To activate the feature, users had to tick a box allowing their chat to be discoverable. While the process required active steps, critics warned that some users might opt in without fully understanding the consequences. Stuckey said the rollback would be complete by Friday morning.

The incident adds to growing concerns around AI and user privacy, particularly as conversational platforms like ChatGPT become more embedded in everyday life.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK universities urged to act fast on AI teaching

UK universities risk losing their competitive edge unless they adopt a clear, forward-looking approach to AI in teaching. Falling enrolments, limited funding, and outdated digital systems have exposed a lack of AI literacy across many institutions.

As AI skills become essential for today’s workforce, employers increasingly expect graduates to be confident users rather than passive observers.

Many universities continue relying on legacy technology rather than exploring the full potential of modern learning platforms. AI tools can enhance teaching by adapting to individual student needs and helping educators identify learning gaps.

However, few staff have received adequate training, and many universities lack the resources or structure to embed AI into day-to-day teaching effectively.

To close the growing gap between education and the workplace, universities must explore flexible short courses and microcredentials that develop workplace-ready skills.

Introducing ethical standards and data transparency from the start will ensure AI is used responsibly without weakening academic integrity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

As Meta AI grows smarter on its own, critics warn of regulatory gaps

While OpenAI’s ChatGPT and Google’s Gemini dominate headlines, Meta’s AI is making quieter, but arguably more unsettling, progress. According to CEO Mark Zuckerberg, Meta’s AI is advancing rapidly and, crucially, learning to improve without external input.

In a blog post titled ‘Personal Superintelligence’, Zuckerberg claimed that Meta AI is becoming increasingly powerful through self-directed development. While he described current gains as modest, he emphasised that the trend is both real and significant.

Zuckerberg framed this as part of a broader mission to build AI that acts as a ‘personal superintelligence’, a tool that empowers individuals and becomes widely accessible. However, critics argue this narrative masks a deeper concern: AI systems that can evolve autonomously, outside human guidance or scrutiny.

The concept of self-improving AI is not new. Researchers have previously built systems capable of learning from other models or user interactions. What’s different now is the speed, scale and opacity of these developments, particularly within big tech companies operating with minimal public oversight.

The progress comes amid weak regulation. While governments, including the US under the Biden administration, have issued AI action plans, experts say these lack the strength to keep pace. Meanwhile, AI is rapidly spreading across everyday services, from healthcare and education to biometric verification.

Recent examples include Google’s behavioural age-estimation tools for teens, illustrating how AI is already making high-stakes decisions. As AI systems become more capable, questions arise: How much data will they access? Who controls them? And can the public meaningfully influence their design?

Zuckerberg struck an optimistic tone, framing Meta’s AI as democratic and empowering. However, that may obscure the risks of AI outpacing oversight, as some tech leaders warn of existential threats while others focus on commercial gains.

The lack of transparency worsens the problem. If Meta’s AI is already showing signs of self-improvement, are similar developments happening in other frontier models, such as GPT or Gemini? Without independent oversight, the public has no clear way to know—and even less ability to intervene.

Until enforceable global regulations are in place, society is left to trust that private firms will self-regulate, even as they compete in a high-stakes race for dominance. That’s a risky gamble when the technology itself is changing faster than we can respond.

As Meta AI evolves with little fanfare, the silence may be more ominous than reassuring. AI’s future may arrive before we are prepared to manage its consequences, and by then, it might be too late to shape it on our terms.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU AI Act oversight and fines begin this August

A new phase of the EU AI Act takes effect on 2 August, requiring member states to appoint oversight authorities and enforce penalties. While the legislation has been in force for a year, this marks the beginning of real scrutiny for AI providers across Europe.

Under the new provisions, countries must notify the European Commission of which market surveillance authorities will monitor compliance. But many are expected to miss the deadline. Experts warn that without well-resourced and competent regulators, the risks to rights and safety could grow.

The complexity is significant. Member states must align enforcement with other regulations, such as the GDPR and Digital Services Act, raising concerns regarding legal fragmentation and inconsistent application. Some fear a repeat of the patchy enforcement seen under data protection laws.

Companies that violate the EU AI Act could face fines of up to €35 million or 7% of global turnover. Smaller firms may face reduced penalties, but enforcement will vary by country.

Rules regarding general-purpose AI models such as ChatGPT, Gemini, and Grok also take effect. A voluntary Code of Practice introduced in July aims to guide compliance, but only some firms, such as Google and OpenAI, have agreed to sign. Meta has refused, arguing the rules stifle innovation.

Existing AI tools have until 2027 to comply fully, but any launched after 2 August must meet the new requirements immediately. With implementation now underway, the AI Act is shifting from legislation to enforcement.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!