UK actors’ union demands rights as AI uses performers’ likenesses without consent

The British performers’ union Equity has warned of coordinated mass action against technology companies and entertainment producers that use its members’ images, voices or likenesses in artificial-intelligence-generated content without proper consent.

Equity’s general secretary, Paul W Fleming, announced plans to mobilise tens of thousands of actors through subject access requests under data-protection law, compelling companies to disclose whether they have used performers’ data in AI content.

The move follows a growing number of complaints from actors about the alleged misuse of their likenesses or voices in AI material. One prominent case involves the Scottish actor Briony Monroe, who claims her facial features and mannerisms were used to create the synthetic performer ‘Tilly Norwood’. The AI studio behind the character denies the allegations.

Equity says the strategy is intended to ‘make it so hard for tech companies and producers to not enter into collective rights deals’. It argues that existing legislation is being circumvented as foundation AI models are trained on actors’ data with little transparency or compensation.

The trade body Pact, which represents studios and producers, acknowledges the importance of AI and counters that firms may fall behind commercially without access to new tools. It also complains about a lack of transparency from companies over what data is used to train AI systems.

In essence, the standoff reflects deeper tensions in the creative industries: how to balance innovation, performer rights and transparency in an era when digital likenesses and synthetic ‘actors’ are emerging rapidly.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Tech giants fund teacher AI training amid classroom chatbot push

Major technology companies are shifting strategic emphasis toward education by funding teacher training in artificial intelligence. Companies such as Microsoft, OpenAI and Anthropic have pledged millions of dollars to train educators and bring chatbots into classrooms.

Under a deal with the American Federation of Teachers (AFT) in the United States, Microsoft will contribute $12.5 million over five years, OpenAI will provide $8 million plus $2 million in technical resources, and Anthropic has pledged $500,000. The AFT plans to build AI training hubs, including one in New York, and aims to train around 400,000 teachers over five years.

At a workshop in San Antonio, dozens of teachers used AI tools such as ChatGPT, Google’s Gemini and Microsoft Copilot to generate lesson plans, podcasts and bilingual flashcards. One teacher noted how quickly AI could generate materials: ‘It can save you so much time.’

However, the initiative raises critical questions. Educators expressed concerns about being replaced by AI, while unions emphasise that teachers must lead training content and maintain control over learning. Technology companies see this as a way to expand into education, but also face scrutiny over influence and the implications for teaching practice.

As schools increasingly adopt AI tools, experts say training must go beyond technical skills to cover ethical use, student data protection and critical thinking. The reforms reflect a broader push to prepare both teachers and students for a future defined by AI.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

AI transforms Japanese education while raising ethical questions

AI is reshaping Japanese education, from predicting truancy risks to teaching English and preserving survivor memories. Schools and universities nationwide are experimenting with systems designed to support teachers and engage students more effectively.

In Saitama’s Toda City, AI analysed attendance, health records, and bullying data to identify pupils at risk of skipping school. During a 2023 pilot, it flagged more than a thousand students and helped teachers prioritise support for those most vulnerable.

Experts praised the system’s potential but warned against excessive dependence on algorithms. Keio University’s Professor Makiko Nakamuro said educators must balance data-driven insights with privacy safeguards and human judgment. Toda City has already banned discriminatory use of AI results.

AI’s role is also expanding in language learning. Universities such as Waseda and Kyushu now use a Tokyo-developed conversation AI that assesses grammar, pronunciation, and confidence. Students say they feel more comfortable practising with a machine than in front of classmates.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

EU expands AI reach through new antenna network

The European Commission has launched new ‘AI Antennas’ across 13 European countries to strengthen AI infrastructure. Seven EU states, including Belgium, Ireland, and Malta, will gain access to high-performance computing through the EuroHPC network.

Six non-EU partners, such as the UK and Switzerland, have also joined the initiative. Their inclusion reflects the EU’s growing cooperation on digital innovation with neighbouring countries despite Brexit and other trade tensions.

Each AI Antenna will serve as a local gateway to the bloc’s supercomputing hubs, providing technical support, training, and algorithmic resources. Countries without an AI Factory of their own can now connect remotely to major systems such as JUPITER.

The Commission says the network aims to spread AI skills and research capabilities across Europe, narrowing regional gaps in digital development. However, smaller nations hosting only antennas are unlikely to house the bloc’s future ‘AI Gigafactories’, which will be up to four times more powerful.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Swiss scientists grow mini-brains to power future computers

In a Swiss laboratory, researchers are using clusters of human brain cells to power experimental computers. The start-up FinalSpark is leading this emerging field of biocomputing, also known as wetware, which uses living neurons instead of silicon chips.

Co-founder Fred Jordan said biological neurons are vastly more energy-efficient than artificial ones and could one day replace traditional processors. He believes brain-based computing may eventually help reduce the massive power demands created by AI systems.

Each ‘bioprocessor’ is made from human skin cells reprogrammed into neurons and grouped into small organoids. Electrodes connect to these clumps, allowing the Swiss scientists to send signals and measure their responses in a digital form similar to binary code.

Scientists emphasise that the technology is still in its infancy and not capable of consciousness. Each organoid contains about ten thousand neurons, compared to a human brain’s hundred billion. FinalSpark collaborates with ethicists to ensure the research remains responsible and transparent.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

SMEs underinsured as Canada’s cyber landscape shifts

Canada’s cyber insurance market is stabilising, with stronger underwriting, steadier loss trends, and more product choice, the Insurance Bureau of Canada says. But the threat landscape is accelerating as attackers weaponise AI, leaving many small and medium-sized enterprises exposed and underinsured.

Rapid market growth brought painful losses during the ransomware surge: from 2019 to 2023, combined loss ratios averaged about 155%, forcing tighter pricing and coverage. Insurers have recalibrated, yet rising AI-enabled phishing and deepfake impersonations are lifting complexity and potential severity.

Policy is catching up unevenly. Canada’s Bill C-8 would revive critical-infrastructure cybersecurity standards, with stronger oversight and baseline rules for risk management and incident reporting. Public–private programmes signal progress but need sustained execution.

SMEs remain the pressure point. Low uptake means even minor breaches can cost tens or hundreds of thousands of dollars, while severe incidents can be fatal to a business. Underinsurance shifts the shock onto the wider economy, challenging insurers to balance affordability with long-term viability.

The Bureau urges practical resilience: clearer governance, employee training, incident playbooks, and fit-for-purpose cover. Education campaigns and free guidance aim to demystify coverage, boost readiness, and help SMEs recover faster when attacks hit, supporting a more durable digital economy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Wikipedia faces traffic decline as AI and social video reshape online search

Wikipedia’s human traffic has fallen by 8% over the past year, a decline the Wikimedia Foundation attributes to changing information habits driven by AI and social media.

The foundation’s Marshall Miller explained that updates to Wikipedia’s bot-detection systems showed that much of an earlier traffic surge had come from undetected bots, revealing a sharper drop in genuine visits.

Miller pointed to the growing use of AI-generated search summaries and the rise of short-form video as key factors. Search engines now provide direct answers using generative AI instead of linking to external sources, while younger users increasingly turn to social video platforms rather than traditional websites.

Although Wikipedia’s knowledge continues to feed AI models, fewer people are reaching the original source.

The foundation warns that the shift poses risks to Wikipedia’s volunteer-driven ecosystem and donation-based model. With fewer visitors, fewer contributors may update content and fewer donors may provide financial support.

Miller urged AI companies and search engines to direct users back to the encyclopedia, ensuring both transparency and sustainability.

Wikipedia is responding by developing a new framework for content attribution and expanding efforts to reach new readers. The foundation also encourages users to support human-curated knowledge by citing original sources and recognising the people behind the information that powers AI systems.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Australian students get 12 months of Google Gemini Pro at no cost

Google has launched a free twelve-month Gemini Pro plan for students in Australia aged eighteen and over, aiming to make AI-powered learning more accessible.

The offer includes the company’s most advanced tools and features designed to enhance study efficiency and critical thinking.

A key addition is Guided Learning mode, which acts as a personal AI coach. Instead of quick answers, it walks students through complex subjects step by step, encouraging a deeper understanding of concepts.

Gemini now also integrates diagrams, images and YouTube videos into responses to make lessons more visual and engaging.

Students can create flashcards, quizzes and study guides automatically from their own materials, helping them prepare for exams more effectively. The Gemini Pro account upgrade provides access to Gemini 2.5 Pro, Deep Research, NotebookLM, Veo 3 for short video creation, and Jules, an AI coding assistant.

With two terabytes of storage and the full suite of Google’s AI tools, the Gemini app aims to support Australian students in their studies and skill development throughout the academic year.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Meta champions open hardware to power the next generation of AI data centres

The US tech giant Meta believes open hardware will define the future of AI data centre infrastructure. Speaking at the Open Compute Project (OCP) Global Summit, the company outlined a series of innovations designed to make large-scale AI systems more efficient, sustainable, and collaborative.

Meta, one of the OCP’s founding members, said open source hardware remains essential to scaling the physical infrastructure required for the next generation of AI.

During the summit, Meta joined industry peers in supporting OCP’s Open Data Center Initiative, which calls for shared standards in power, cooling, and mechanical design.

The company also unveiled a new generation of network fabrics for AI training clusters, integrating NVIDIA’s Spectrum Ethernet to enable greater flexibility and performance.

As part of the effort, Meta became an initiating member of Ethernet for Scale-Up Networking, aiming to strengthen connectivity across increasingly complex AI systems.

Meta further introduced the Open Rack Wide (ORW) form factor, an open source data rack standard optimised for the power and cooling demands of modern AI.

Built on ORW specifications, AMD’s new Helios rack was presented as the most advanced AI rack yet, embodying the shift toward interoperable and standardised infrastructure.

Meta also showcased new AI hardware platforms built to improve performance and serviceability for large-scale generative AI workloads.

Sustainability remains central to Meta’s strategy. The company presented ‘Design for Sustainability’, a framework to reduce hardware emissions through modularity, reuse, and extended lifecycles.

It also shared how its Llama AI models help track emissions across millions of components.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Meta previews parental controls over teen AI character chats

Meta has previewed upcoming parental control features for its AI experiences, particularly aimed at teens’ interactions with AI characters. The new tools are expected to roll out next year.

Under the proposed controls, parents will be able to turn off chats between teens and AI characters altogether, though the broader Meta AI chatbot remains accessible. They can also block specific characters if they wish. Parents will receive topic summaries of what teens are discussing with AI characters and with Meta AI itself.

The first deployment will be on Instagram, with initial availability in English for the US, UK, Canada and Australia. Meta says it recognises the challenges parents face in guiding children through new technology, and wants these tools to simplify oversight.

Meta also notes that AI content and experiences intended for teens will follow a PG-13 standard: avoiding extreme violence, nudity and graphic drug content. Teens currently interact with only a limited set of AI characters under age-appropriate guidelines.

Additionally, Meta plans to let parents set time limits on teens’ use of AI characters. The company says it is also working to detect and discourage attempts by users to falsify their age to bypass restrictions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot