UK actors’ union demands rights as AI uses performers’ likenesses without consent

The British performers’ union Equity has warned of coordinated mass action against technology companies and entertainment producers that use its members’ images, voices or likenesses in artificial-intelligence-generated content without proper consent.

Equity’s general secretary, Paul W Fleming, announced plans to mobilise tens of thousands of actors through subject access requests under data-protection law, compelling companies to disclose whether they have used performers’ data in AI content.

The move follows a growing number of complaints from actors about alleged misuse of their likenesses or voices in AI material. One prominent case involves the Scottish actor Briony Monroe, who claims her facial features and mannerisms were used to create the synthetic performer ‘Tilly Norwood’. The AI studio behind the character denies the allegations.

Equity says the strategy is intended to ‘make it so hard for tech companies and producers to not enter into collective rights deals’. It argues that existing legislation is being circumvented as foundational AI models are trained on data from actors, with little transparency or compensation.

The trade body Pact, which represents studios and producers, acknowledges the importance of performers’ rights but argues that firms risk falling behind commercially without access to new AI tools. Pact also complains about the lack of transparency from technology companies over what data is used to train AI systems.

In essence, the standoff reflects deeper tensions in the creative industries: how to balance innovation, performer rights and transparency in an era when digital likenesses and synthetic ‘actors’ are emerging rapidly.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Tech giants fund teacher AI training amid classroom chatbot push

Major technology companies are shifting strategic emphasis toward education by funding teacher training in artificial intelligence. Companies such as Microsoft, OpenAI and Anthropic have pledged millions of dollars to train educators and bring chatbots into classrooms.

Under a deal with the American Federation of Teachers (AFT) in the United States, Microsoft will contribute $12.5 million over five years, OpenAI will provide $8 million plus $2 million in technical resources, and Anthropic has pledged $500,000. The AFT plans to build AI training hubs, including one in New York, and aims to train around 400,000 teachers over five years.

At a workshop in San Antonio, dozens of teachers used AI tools such as ChatGPT, Google’s Gemini and Microsoft Copilot to generate lesson plans, podcasts and bilingual flashcards. One teacher noted how quickly AI could generate materials: ‘It can save you so much time.’

However, the initiative raises critical questions. Educators expressed concerns about being replaced by AI, while unions emphasise that teachers must lead training content and maintain control over learning. Technology companies see this as a way to expand into education, but also face scrutiny over influence and the implications for teaching practice.

As schools increasingly adopt AI tools, experts say training must go beyond technical skills to cover ethical use, student data protection and critical thinking. The reforms reflect a broader push to prepare both teachers and students for a future defined by AI.

Civil groups question independence of Irish privacy watchdog

More than 40 civil society organisations have asked the European Commission to investigate Ireland’s privacy regulator. Their letter questions whether the Irish Data Protection Commission (DPC) remains independent following the appointment of a former Meta lobbyist as Commissioner.

Niamh Sweeney, previously Facebook’s head of public policy for Ireland, became the DPC’s third commissioner in September. Her appointment has triggered concerns among digital rights groups, given the DPC’s role in overseeing compliance with the EU’s General Data Protection Regulation (GDPR).

The letter calls for a formal work programme to ensure that data protection rules are enforced consistently and free from political or corporate influence. Civil society groups argue that effective oversight is essential to preserve citizens’ trust and uphold the GDPR’s credibility.

The DPC, headquartered in Dublin, supervises major tech firms such as Meta, Apple, and Google under the EU’s privacy regime. Critics have long accused it of being too lenient toward large companies operating in Ireland’s digital sector.

Public consultation flaws risk undermining Digital Fairness Act debate

As the European Commission’s public consultation on the Digital Fairness Act enters its final phase, growing criticism points to flaws in how citizen feedback is collected.

Critics say the survey’s structure favours those who support additional regulation while restricting opportunities for dissenting voices to explain their reasoning. The issue raises concerns over how such results may influence the forthcoming impact assessment.

The Call for Evidence and Public Consultation, hosted on the Have Your Say portal, allows only supporters of the Commission’s initiative to provide detailed responses. Those who oppose new regulation are reportedly limited to choosing a single option with no open field for justification.

Such an approach risks producing a partial view of European opinion rather than a balanced reflection of stakeholders’ perspectives.

Experts argue that this design contradicts the EU’s Better Regulation principles, which emphasise inclusivity and objectivity.

They urge the Commission to raise its methodological standards, ensuring surveys are neutral, questions are not loaded, and all respondents can present argument-based reasoning. Without these safeguards, consultations may become instruments of validation instead of genuine democratic participation.

Advocates for reform believe the Commission’s influence could set a positive precedent for the entire policy ecosystem. By promoting fairer consultation practices, the EU could encourage both public and private bodies to engage more transparently with Europe’s diverse digital community.

Wikipedia faces traffic decline as AI and social video reshape online search

Wikipedia’s human traffic has fallen by 8% over the past year, a decline the Wikimedia Foundation attributes to changing information habits driven by AI and social media.

The foundation’s Marshall Miller explained that updates to Wikipedia’s bot detection system showed much of the earlier traffic surge had come from undetected bots, revealing a sharper drop in genuine visits.

Miller pointed to the growing use of AI-generated search summaries and the rise of short-form video as key factors. Search engines now provide direct answers using generative AI instead of linking to external sources, while younger users increasingly turn to social video platforms rather than traditional websites.

Although Wikipedia’s knowledge continues to feed AI models, fewer people are reaching the original source.

The foundation warns that the shift poses risks to Wikipedia’s volunteer-driven ecosystem and donation-based model. With fewer visitors, fewer contributors may update content and fewer donors may provide financial support.

Miller urged AI companies and search engines to direct users back to the encyclopedia, ensuring both transparency and sustainability.

Wikipedia is responding by developing a new framework for content attribution and expanding efforts to reach new readers. The foundation also encourages users to support human-curated knowledge by citing original sources and recognising the people behind the information that powers AI systems.

AI becomes a new spiritual guide for worshippers in India

Across India, a growing number of worshippers are using AI for spiritual guidance. From chatbots like GitaGPT to robotic deities in temples, technology is changing how people connect with faith.

Apps trained on Hindu scriptures offer personalised advice, often serving as companions for those seeking comfort and purpose in a rapidly changing world.

Developers such as Vikas Sahu have built AI chatbots based on the Bhagavad Gita, attracting thousands of users in just days. Major organisations like the Isha Foundation have also adopted AI to deliver ancient wisdom through modern apps, blending spiritual teachings with accessibility.

Large religious gatherings, including the Maha Kumbh Mela, now use AI tools and virtual reality to guide and connect millions of devotees.

While many find inspiration in AI-guided spirituality, experts warn of ethical and cultural challenges. Anthropologist Holly Walters notes that users may perceive AI-generated responses as divine truth, which could distort traditional belief systems.

Oxford researcher Lyndon Drake adds that AI might challenge the authority of religious leaders, as algorithms shape interpretations of sacred texts.

Despite the risks, faith-driven AI continues to thrive. For some devotees, digital gods and chatbots offer something traditional structures often cannot: immediate, non-judgemental access to spiritual guidance at any time.

Harvard’s health division supports AI-powered medical learning

Harvard Health Publishing (HHP) has partnered with Microsoft to use its health content to train the Copilot AI system. The collaboration seeks to enhance the accuracy of healthcare responses on Microsoft’s AI platform, according to the Wall Street Journal.

HHP publishes consumer health resources reviewed by Harvard scientists, covering topics such as sleep, nutrition, and pain management. The institution confirmed that Microsoft has paid to license its articles, expanding a previous agreement made in 2022.

The move is designed to make medically verified information more accessible to the public through Copilot, which now reaches over 33 million users.

Harvard’s Soroush Saghafian said the deal could help cut errors in AI-generated medical advice, a key concern in healthcare. He emphasised the importance of rigorous testing before deployment, warning that unverified tools could pose serious risks to users.

Harvard continues to invest in AI research and integration across its academic programmes. Recent initiatives include projects to address bias in medical training and studies exploring AI’s role in drug development and cancer treatment.

Meta previews parental controls over teen AI character chats

Meta has previewed upcoming parental control features for its AI experiences, particularly aimed at teens’ interactions with AI characters. The new tools are expected to roll out next year.

Under the proposed controls, parents will be able to turn off chats between teens and AI characters altogether, though the broader Meta AI chatbot remains accessible. They can also block specific characters if they wish. Parents will receive topic summaries of what teens are discussing with AI characters and with Meta AI itself.

The first deployment will be on Instagram, with initial availability in English for the US, UK, Canada and Australia. Meta says it recognises the challenges parents face in guiding children through new technology, and wants these tools to simplify oversight.

Meta also notes that AI content and experiences intended for teens will follow a PG-13 standard: avoiding extreme violence, nudity and graphic drug content. Teens currently interact with only a limited set of AI characters under age-appropriate guidelines.

Additionally, Meta plans to allow parents to set time limits on teens’ use of AI characters. The company is also working to detect and discourage attempts by users to falsify their age to bypass restrictions.

UK government urges awareness as £106m lost to romance fraud in one year

Romance fraud has surged across the United Kingdom, with new figures showing that victims lost a combined £106 million in the past financial year. Action Fraud, the UK’s national reporting centre for cybercrime, described the crime as one that causes severe financial, emotional, and social damage.

Among the victims is London banker Varun Yadav, who lost £40,000 to a scammer posing as a romantic partner on a dating app. After months of chatting online, the fraudster persuaded him to invest in a cryptocurrency platform.

When his funds became inaccessible, Yadav realised he had been deceived. ‘You see all the signs, but you are so emotionally attached,’ he said. ‘You are willing to lose the money, but not the connection.’

The Financial Conduct Authority (FCA) said banks should play a stronger role in disrupting romance scams, calling for improved detection systems and better staff training to identify vulnerable customers. It urged firms to adopt what it called ‘compassionate aftercare’ for those affected.

Romance fraud typically involves criminals creating fake online profiles to build emotional connections before manipulating victims into transferring money.

The National Cyber Security Centre (NCSC) and UK police recommend maintaining privacy on social media, avoiding financial transfers to online contacts, and speaking openly with friends or family before sending money.

The Metropolitan Police recently launched an awareness campaign featuring victim testimonies and guidance on spotting red flags. The initiative also promotes collaboration with dating apps, banks, and social platforms to identify fraud networks.

Detective Superintendent Kerry Wood, head of economic crime for the Met Police, said that romance scams remain ‘one of the most devastating’ forms of fraud. ‘It’s an abuse of trust which undermines people’s confidence and sense of self-worth. Awareness is the most powerful defence against fraud,’ she said.

Although Yadav never recovered his savings, he said sharing his story helped him rebuild his life. He urged others facing similar scams to speak up: ‘Do not isolate yourself. There is hope.’

Meta to pull all political ads in EU ahead of new transparency law

Meta Platforms has said it will stop selling and showing political, electoral and social issue advertisements across its services in the European Union from early October 2025. The decision follows the EU’s Transparency and Targeting of Political Advertising (TTPA) regulation coming into full effect on 10 October.

Under TTPA, platforms will be required to clearly label political ads, disclose the sponsor, the election or social issue at hand, the amounts paid, and how the ads are targeted. These obligations also include strict conditions on targeting and require explicit consent for certain data use.

Meta said the requirements create ‘significant operational challenges and legal uncertainties’ and labelled parts of the new rules ‘unworkable’ for advertisers and platforms. It said that personalised ads are widely used for issue-based campaigns and that limiting them might restrict how people access political or social issue-related information.

The company joins Google, which made a similar move last year citing comparable concerns about TTPA compliance.

While paid political, electoral and social issue ads will be withdrawn, Meta says organic political content (e.g. users posting or sharing political views) will still be permitted.
