Google hit with EU complaint over AI Overviews

Google is facing an antitrust complaint in the European Union over its AI Overviews feature, following a formal filing by the Independent Publishers Alliance.

The group alleges that Google has been using web content without proper consent to power its AI-generated summaries, causing considerable harm to online publishers.

The complaint claims that publishers have lost traffic, readers and advertising revenue due to these summaries. It also argues that opting out of AI Overviews is not a real choice unless publishers are prepared to vanish entirely from Google’s search results.

AI Overviews launched over a year ago and now appear at the top of results for many search queries, summarising information using AI. Although the tool has expanded rapidly, critics argue it drives users away from original publisher websites, especially news outlets.

Google has responded by stating its AI search tools allow users to ask more complex questions and help businesses and creators get discovered. The tech giant also insisted that web traffic patterns are influenced by many factors and warned against conclusions based on limited data.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

EU rejects delay for AI Act rollout

The EU has confirmed it will enforce the AI Act on its original schedule, despite growing calls from American and European tech firms to delay the rollout.

Major companies, including Alphabet, Meta, ASML and Mistral, have urged the European Commission to push back the timeline by several years, citing concerns over compliance costs.

Rejecting the pressure, a Commission spokesperson clarified there would be no pause or grace period. The legislation’s deadlines remain, with general-purpose AI rules taking effect this August and stricter requirements for high-risk systems following in August 2026.

The AI Act represents the EU’s effort to regulate AI across various sectors, aiming to balance innovation and public safety. While tech giants argue that the rules are too demanding, the EU insists legal certainty is vital and the framework must move forward as planned.

The Commission intends to simplify parts of the framework later this year, for example by easing reporting demands on smaller businesses. Yet the core structure and deadlines of the AI Act will not be altered.

BRICS calls for AI data regulations amid challenges with de-dollarisation

BRICS leaders in Rio de Janeiro have called for stricter global rules on how AI uses data, demanding fair compensation for content used without permission.

The group’s draft statement highlights growing frustration with tech giants using vast amounts of unlicensed content to train AI models.

Despite making progress on digital policy, BRICS once again stalled on a long-standing ambition to reduce reliance on the US dollar.

After a decade of talks, the bloc’s cross-border payments system remains in limbo. Member nations continue to debate infrastructure, governance and how to work around non-convertible currencies and sanctions.

China is moving independently, expanding the yuan’s international use and launching domestic currency futures.

Meanwhile, the rest of the bloc struggles with legal, financial and technical hurdles, leaving the dream of a unified alternative to the dollar on hold. Even a proposed New Investment Platform remains mired in internal disagreements.

In response to rising global debt concerns, BRICS introduced a Multilateral Guarantees Initiative within the New Development Bank. It aims to improve credit access across the Global South without needing new capital, especially for countries struggling to borrow in dollar-dominated markets.

Council of Europe picks Jylo to power AI platform

The Council of Europe has chosen Jylo, a European enterprise AI provider, to support over 3,000 users across its organisation.

The decision followed a competitive selection process involving multiple AI vendors, with Jylo standing out for its regulatory compliance and platform adaptability.

As Europe’s leading human rights body, the Council aims to use AI responsibly to support its legal and policy work. Jylo’s platform will streamline document-based workflows and reduce administrative burdens, helping staff focus on critical democratic and legal missions.

Leaders from both Jylo and the Council praised the collaboration. Jylo CEO Shawn Curran said the partnership reflects shared values around regulatory compliance and innovation.

The Council’s CIO, John Hunter, described Jylo’s commitment to secure AI as a perfect fit for the institution’s evolving digital strategy.

Jylo’s AI Assistant and automation features are designed specifically for knowledge-driven organisations. The rollout is expected to strengthen the Council’s internal efficiency and reinforce Jylo’s standing as a trusted AI partner across the European public and legal sectors.

Spotify hit by AI band hoax controversy

A band called The Velvet Sundown has gone viral on Spotify, gaining over 850,000 monthly listeners, yet almost nothing is known about the people behind it.

With no live performances, interviews, or social media presence for its supposed members, the group has fuelled growing speculation that both it and its music may be AI-generated.

The mystery deepened after Rolling Stone first reported that a spokesperson had admitted the tracks were made using an AI tool called Suno, only for that spokesperson himself to turn out to be fake.

The band denies any connection to the individual, stating on Spotify that the account impersonating it on X is also fake.

AI detection tools have added to the confusion. Rival platform Deezer flagged the music as ‘100% AI-generated’, although Spotify has remained silent.

While CEO Daniel Ek has said AI music is not banned from the platform, he has expressed concerns about AI that mimics real artists.

The case has reignited industry fears over AI’s impact on musicians. Experts warn that public trust in online content is weakening.

Musicians and advocacy groups argue that AI is undercutting creativity by training on human-made songs without permission. As copyright battles continue, pressure is mounting for stronger government regulation.

AI bots are taking your meetings for you

AI-powered note takers are increasingly filling virtual meeting rooms, sometimes even outnumbering the humans present. Workers are now sending bots to listen, record, and summarise meetings they no longer feel the need to attend themselves.

Major platforms such as Zoom, Teams and Meet offer built-in AI transcription, while startups like Otter and Fathom provide bots that quietly join meetings or listen in through users’ devices. The tools raise new concerns about privacy, consent, and the erosion of human engagement.

Some workers worry that constant recording suppresses honest conversation and makes meetings feel performative. Others, including lawyers and business leaders, point out the legal grey zones created by using these bots without full consent.

AliExpress agrees to binding EU rules on data and transparency

AliExpress has agreed to legally binding commitments with the European Commission to comply with the Digital Services Act (DSA). These cover six key areas, including recommender systems, advertising transparency, and researcher data access.

The announcement on 18 June marks only the second time a major platform has formally committed to specific changes under the DSA, following TikTok.

The platform promised greater transparency in its recommendation algorithms, user opt-out from personalisation, and clearer information on product rankings. It also committed to allowing researchers access to publicly available platform data through APIs and customised requests.

However, the lack of clear definitions around terms such as ‘systemic risk’ and ‘public data’ may limit practical oversight.

AliExpress has also established an internal monitoring team to ensure implementation of these commitments. Yet experts argue that without measurable benchmarks and external verification, internal monitoring may not be enough to guarantee meaningful compliance or accountability.

The Commission, meanwhile, is continuing its investigation into the platform’s role in the distribution of illegal products.

These commitments reflect the EU’s broader enforcement strategy under the DSA, aiming to establish transparency and accountability across digital platforms. The agreement is a positive start but highlights the need for stronger oversight and clearer definitions for lasting impact.

TikTok struggles to stop the spread of hateful AI videos

Google’s Veo 3 video generator has enabled a new wave of racist AI content to spread across TikTok, despite both platforms having strict policies banning hate speech.

According to MediaMatters, several TikTok accounts have shared AI-generated videos promoting antisemitic and anti-Black stereotypes, many of which circulated widely before being removed.

These short, highly realistic videos often included offensive depictions, and the visible ‘Veo’ watermark confirmed their origin from Google’s model.

While both TikTok and Google officially prohibit the creation and distribution of hateful material, enforcement has been patchy. TikTok claims to use both automated systems and human moderators, yet the overwhelming volume of uploads appears to have delayed action.

Although TikTok says it banned over half the accounts before MediaMatters’ findings were published, harmful videos still managed to reach large audiences.

Google also maintains a Prohibited Use Policy banning hate-driven content. However, Veo 3’s advanced realism, and the difficulty of detecting coded prompts, make it easier for users to bypass safeguards.

Testing by reporters suggests the model is more permissive than previous iterations, raising concerns about its ability to filter out offensive material before it is created.

With Google planning to integrate Veo 3 into YouTube Shorts, concerns are rising that harmful content may soon flood other platforms. TikTok and Google appear to lack the enforcement capacity to keep pace with the abuse of generative AI.

Despite strict rules on paper, both companies are struggling to prevent their technology from fuelling racist narratives at scale.

Meta’s AI chatbots are designed to initiate conversations and enhance user engagement

Meta is training AI-powered chatbots that can remember previous conversations, send personalised follow-up messages, and actively re-engage users without needing a prompt.

Internal documents show that the company aims to keep users interacting longer across platforms like Instagram and Facebook by making bots more proactive and human-like.

Under the project code-named ‘Omni’, contractors from the firm Alignerr are helping train these AI agents using detailed personality profiles and memory-based conversations.

These bots are developed through Meta’s AI Studio — a no-code platform launched in 2024 that lets users build customised digital personas, from chefs and designers to fictional characters. A bot can send a single follow-up message only after a user initiates a conversation, and only within a 14-day window.

Bots must match their assigned personality and reference earlier interactions, offering relevant and light-hearted responses while avoiding emotionally charged or sensitive topics unless the user brings them up. Meta says the feature is being tested and rolled out gradually.

The company hopes it will not only improve user retention but also serve as a response to what CEO Mark Zuckerberg calls the ‘loneliness epidemic’.

With revenue from generative AI tools projected to reach up to $3 billion in 2025, Meta’s focus on longer, more engaging chatbot interactions appears to be as strategic as it is social.

X to test AI-generated Community Notes

X, the social platform formerly known as Twitter, is preparing to test a new feature allowing AI chatbots to generate Community Notes.

These notes, a user-driven fact-checking system expanded under Elon Musk, are meant to provide context on misleading or ambiguous posts, such as AI-generated videos or political claims.

The pilot will enable AI systems like Grok or third-party large language models to submit notes via API. Each AI-generated comment will be treated the same as a human-written one, undergoing the same vetting process to ensure reliability.

However, concerns remain about AI’s tendency to hallucinate, where it may generate inaccurate or fabricated information instead of grounded fact-checks.

A recent research paper by the X Community Notes team suggests that AI and humans should collaborate, with people offering reinforcement learning feedback and acting as the final layer of review. The aim is to help users think more critically, not replace human judgment with machine output.

Still, risks persist. Over-reliance on AI, particularly models prone to excessive helpfulness rather than accuracy, could lead to incorrect notes slipping through.

There are also fears that human raters could become overwhelmed by a flood of AI submissions, reducing the overall quality of the system. X intends to trial the system over the coming weeks before any wider rollout.
