Google faces renewed EU scrutiny over AI competition

The European Commission has opened a formal antitrust investigation into whether AI features embedded in online search are being used to unfairly squeeze competitors in newly emerging digital markets shaped by generative AI.

The probe targets Alphabet-owned Google, focusing on allegations that the company imposes restrictive conditions on publishers and content creators while giving its own AI-driven services preferential placement over rival technologies and alternative search offerings.

Regulators are examining products such as AI Overviews and AI Mode, assessing how publisher content is reused within AI-generated summaries and whether media organisations are compensated in a clear, fair, and transparent manner.

EU competition chief Teresa Ribera said the European Commission’s action reflects a broader effort to protect online media and preserve competitive balance as artificial intelligence increasingly shapes how information is produced, discovered, and monetised.

The case adds to years of scrutiny by the European Commission over Google’s search and advertising businesses, even as the company proposes changes to its ad tech operations and continues to challenge earlier antitrust rulings.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Trump allows Nvidia to sell chips to approved Chinese customers

US President Donald Trump has allowed Nvidia to sell H200 AI chips to approved customers in China, marking a shift in export controls. The decision also covers firms such as AMD and follows continued lobbying by Nvidia chief executive Jensen Huang.

Nvidia had previously been barred from selling its most advanced chips to China, and an earlier partial reversal permitted some sales only on condition that the firm pay a share of its Chinese revenues to the US government. China later ordered firms to stop buying Nvidia products, pushing them towards domestic semiconductors.

Analysts suggest the new policy may buy time for negotiations over rare earth supplies, as China dominates processing of these minerals. Access to H200 chips may aid China’s tech sector, but experts warn the chips could also strengthen military AI capabilities.

Nvidia welcomed the announcement, saying the decision strikes a balance that benefits American industry. Shares rose slightly after the news, although the arrangement is expected to face scrutiny from national security advocates.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Canada-EU digital partnership expands cooperation on AI and security

The European Union and Canada have strengthened their digital partnership at the first meeting of the Digital Partnership Council in Montreal. Both sides outlined a joint plan to enhance competitiveness and innovation, while supporting smaller firms through targeted regulation.

Senior representatives reaffirmed that cooperation with like-minded partners will be essential for economic resilience.

A new Memorandum of Understanding on AI placed a strong emphasis on trustworthy systems, shared standards and wider adoption across strategic sectors.

The two partners will exchange best practices to support sectors such as healthcare, manufacturing, energy, culture and public services.

They also agreed to collaborate on large-scale AI infrastructures and access to computing capacity, while encouraging scientific collaboration on advanced AI models and climate-related research.

The meeting also produced an agreement on a structured dialogue on data spaces.

A second Memorandum of Understanding covered digital credentials and trust services. The plan includes joint testing of digital identity wallets, pilot projects and new use cases aimed at interoperability.

The EU and Canada also intend to work more closely on the protection of independent media, the promotion of reliable information online and the management of risks created by generative AI.

Both sides underlined their commitment to secure connectivity, with cooperation on 5G, subsea cables and potential new Arctic routes to strengthen global network resilience. Further plans aim to deepen collaboration on quantum technologies, semiconductors and high-performance computing.

The renewed partnership reflects a shared commitment to resilient supply chains and secure cloud infrastructure as both regions prepare for future technological demands.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Survey reveals split views on AI in academic peer review

Growing use of generative AI within peer review is creating a sharp divide among physicists, according to a new survey by IOP Publishing, the publishing arm of the Institute of Physics.

Researchers appear more informed and more willing to express firm views, with a notable rise in those who see a positive effect and a large group voicing strong reservations. Many believe AI tools accelerate early reading and help reviewers concentrate on novelty instead of routine work.

Others fear that reviewers might replace careful evaluation with automated text generation, undermining the value of expert judgement.

A sizeable proportion of researchers would be unhappy if AI shaped assessments of their own papers, even though many quietly rely on such tools when reviewing for journals. Publishers are now revisiting their policies, yet they aim to respect authors who expect human-led scrutiny.

Editors also report that AI-generated reports often lack depth and fail to reflect domain expertise. Concerns extend to confidentiality, with organisations such as the American Physical Society warning that uploading manuscripts to chatbots can breach author trust.

Legal disputes about training data add further uncertainty, pushing publishers to approach policy changes with caution.

Despite disagreements, many researchers accept that AI will remain part of peer review as workloads increase and scientific output grows. The debate now centres on how to integrate new tools in a way that supports researchers instead of weakening the foundations of scholarly communication.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Creatives warn that AI is reshaping their jobs

AI is accelerating across creative fields, raising concerns among workers who say the technology is reshaping livelihoods faster than anyone expected.

A University of Cambridge study recently found that more than two-thirds of creative professionals fear AI has undermined their job security, and many now describe the shift as unavoidable.

One of them is Norwich-based artist Aisha Belarbi, who says the rise of image-generation tools has made commissions harder to secure as clients ‘can just generate whatever they want’. Although she works in both traditional and digital media, Belarbi says she increasingly struggles to distinguish original art from AI output. That uncertainty, she argues, threatens the value of lived experience and the labour behind creative work.

Others are embracing the change. Videographer JP Allard transformed his Milton Keynes production agency after discovering the speed and scale of AI-generated video. His company now produces multilingual ‘digital twins’ and fully AI-generated commercials, work he says is quicker and cheaper than traditional filming. Yet he acknowledges that the pace of change can leave staff behind and says retraining has not kept up with the technology.

For musician Ross Stewart, the concern centres on authenticity. After listening to what he later discovered was an AI-generated blues album, he questioned the impact of near-instant song creation on musicians’ livelihoods and exposure. He believes audiences will continue to seek human performance, but worries that the market for licensed music is already shifting towards AI alternatives.

Copywriter Niki Tibble has experienced similar pressures. Returning from maternity leave, she found that AI tools had taken over many entry-level writing tasks. While some clients still prefer human writers for strategy, nuance and brand voice, Tibble’s work has increasingly shifted toward reviewing and correcting AI-generated copy. She says the uncertainty leaves her unsure whether her role will exist in a decade.

Across these stories, creative workers describe a sector in rapid transition. While some see new opportunities, many fear the speed of adoption and a future where AI replaces the very work that has long defined their craft.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New interview study tracks how workers adapt to AI

Anthropic has unveiled Anthropic Interviewer, an AI-driven tool for large-scale workplace interviews. The system used Claude to conduct 1,250 structured interviews with professionals across the general workforce, creative fields and scientific research.

In surveys, 86 percent said AI saves time and 65 percent felt satisfied with its role at work. Workers often hoped to automate routine tasks while preserving responsibilities that define their professional identity.

Creative workers reported major time savings and quality gains yet faced stigma and economic anxiety around AI use. Many hid AI tools from colleagues, feared market saturation and still insisted on retaining creative control.

Across groups, professionals imagined careers where humans oversee AI systems rather than perform every task themselves. Anthropic plans to keep using Anthropic Interviewer to track attitudes and inform future model design.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Japan weighs easing rules on personal data use

Japan is preparing to relax restrictions on personal data use to support rapid AI development. Government sources say a draft bill aims to expand third-party access to sensitive information.

Plans include allowing medical histories and criminal records to be obtained without consent for statistical purposes. Japanese officials argue such access could accelerate research while strengthening domestic competitiveness.

New administrative fines would target companies that profit from unlawfully acquired data affecting large groups. Penalties would match any gains made through misconduct, reflecting growing concern over privacy abuses.

A government panel has been reviewing the law since 2023 and intends to present reforms soon. Debate is expected to intensify, with critics warning that loosening data rules to support AI development could heighten risks to individual rights.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Tilly Norwood creator accelerates AI-first entertainment push

The AI talent studio behind synthetic actress Tilly Norwood is preparing to expand what it calls the ‘Tilly-verse’, moving into a new phase of AI-first entertainment built around multiple digital characters.

Xicoia, founded by Particle6 and Tilly creator Eline van der Velden, is recruiting for nine roles spanning writing, production, growth, and AI development, including a junior comedy writer, a social media manager, and a senior ‘AI wizard-in-chief’.

The UK-based studio says the hires will support Tilly’s planned 2026 expansion into on-screen appearances and direct fan interaction, alongside the introduction of new AI characters designed to coexist within the same fictional universe.

Van der Velden argues the project creates jobs rather than replacing them, positioning the studio as a response to anxieties around AI in entertainment and rejecting claims that Tilly is meant to displace human performers.

Industry concerns persist, however, with actors’ representatives disputing whether synthetic creations can be considered performers at all and warning that protecting human artists’ names, images, and likenesses remains critical as AI adoption accelerates.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Noyb study points to demand for tracking-free option

A new study commissioned by noyb reports that most users favour a tracking-free advertising option when navigating ‘Pay or Okay’ systems. Researchers found little genuine support for data collection when participants were asked without pressure.

Consent rates rose sharply when users were offered only a choice between paying and agreeing to tracking, with most opting to consent. The findings indicate that the absence of a realistic alternative shapes outcomes more than actual preference.

The introduction of a third option featuring advertising without tracking prompted a strong shift, with most participants choosing that route. Evidence suggests users accept ad-funded models provided their behavioural data remains untouched.

Researchers observed similar patterns on social networks, news sites and other platforms, undermining claims that certain sectors require special treatment. Debate continues as regulators assess whether ‘Pay or Okay’ complies with EU data protection rules such as the GDPR.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK lawmakers push for binding rules on advanced AI

Political pressure is building in Westminster as more than 100 parliamentarians call for binding regulation of the most advanced AI systems, arguing that current safeguards lag far behind industry progress.

A cross-party group, supported by former defence and AI ministers, warns that unregulated superintelligent models could threaten national and global security.

The campaign, coordinated by Control AI and backed by tech figures including Skype co-founder Jaan Tallinn, urges Prime Minister Keir Starmer to distance the UK from the US stance against strict federal AI rules.

Experts such as Yoshua Bengio and senior peers argue that governments remain far behind AI developers, leaving companies to set the pace with minimal oversight.

Calls for action come after warnings from frontier AI scientists that the world must decide by 2030 whether to allow highly advanced systems to self-train.

Campaigners want the UK to champion global agreements limiting superintelligence development, establish mandatory testing standards and introduce an independent watchdog to scrutinise AI use in the public sector.

Government officials maintain that AI is already regulated through existing frameworks, though critics say the approach lacks urgency.

Pressure is growing for new, binding rules on the most powerful models, with advocates arguing that rapid advances mean strong safeguards may be needed within the next two years.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!