US Copyright Office avoids clear decision on AI and fair use

The US Copyright Office has stopped short of deciding whether AI companies can legally use copyrighted material to train their systems under fair use.

Its newly released report acknowledges that some uses—such as non-commercial research—may qualify, while others, like replicating expressive works from pirated content to produce market-ready AI output, likely won’t.

Rather than offering a definitive answer, the Office said such cases must be assessed by the courts, not through a universal standard.

The latest report is the third in a series aimed at guiding how copyright law applies to AI-generated content. It reiterates that works entirely created by AI cannot be copyrighted, but human-edited outputs might still qualify.

The 108-page document focuses heavily on whether AI training transforms copyrighted content enough to qualify as fair use, and whether it harms creators’ livelihoods through lost sales or diluted markets.

Instead of setting new policy, the Office highlights existing legal principles, especially the four fair use factors: the purpose and character of the use, the nature of the copyrighted work, the amount used, and the effect on the market for the original.

It notes that AI-generated content can sometimes alter original works meaningfully, but when styles or outputs closely resemble protected material, legal risks remain. Tools like content filters are seen as helpful in preventing infringement, even though they’re not always reliable.

The timing of the report has been overshadowed by political turmoil. President Donald Trump reportedly dismissed both the Librarian of Congress and the head of the Copyright Office within days of the report’s release.

Meanwhile, creators continue to urge the government not to treat AI training as fair use, arguing that doing so threatens the value of original work. The debate is now expected to unfold in courtrooms rather than regulatory offices.

Jamie Lee Curtis calls out Zuckerberg over AI scam using her likeness

Jamie Lee Curtis has directly appealed to Mark Zuckerberg after discovering her likeness had been used without consent in an AI-generated advert.

Posting on Facebook, Curtis expressed her frustration with Meta’s lack of proper channels to report such abuse, stating she had exhausted all official avenues before resorting to a public plea.

The fake video reportedly manipulated footage from an emotional interview following the January wildfires in Los Angeles, inserting false statements under the guise of a product endorsement.

Instead of remaining silent, Curtis urged Zuckerberg to take action, saying the unauthorised content damaged her integrity and voice. Within hours of her public callout, Meta confirmed the video had been removed for breaching its policies, a rare example of a swift response.

‘It worked! Yay Internet! Shame has its value!’ she wrote in a follow-up, though she also highlighted the broader risks posed by deepfakes.

The actress joins a growing list of celebrities, including Taylor Swift and Scarlett Johansson, who’ve been targeted by AI misuse.

Swift was forced to publicly clarify her political stance after an AI-generated video falsely showed her endorsing Donald Trump, while Johansson criticised OpenAI for allegedly using a voice nearly identical to hers after she had declined to take part in the project.

The issue has reignited concerns around consent, misinformation and the exploitation of public figures.

Instead of waiting for further harm, lawmakers in California have already begun pushing back. New legislation signed by Governor Gavin Newsom aims to protect performers from unauthorised digital replicas and deepfakes.

Meanwhile, in Washington, proposals like the NO FAKES Act seek to hold tech platforms accountable, with fines potentially running to thousands of dollars per violation. As Curtis and others warn, without stronger protections the misuse of AI could spiral further, threatening not just celebrities but the public at large.

Autonomous AI agents are the next phase of enterprise automation

Organisations across sectors are turning to agentic automation—an emerging class of AI systems designed to think, plan, and act autonomously to solve complex, multi-step problems.

Unlike traditional automation tools, which follow rigid rules, agentic systems use large language models (LLMs) and robotic process automation (RPA) to navigate ambiguity and make contextual decisions.

‘Agentic automation is the next generation of automation,’ said UiPath VP Robbie Mackness. ‘It’s about creating systems that can observe, reason, and act with minimal human input.’
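
The observe-reason-act loop Mackness describes is simple to sketch. The minimal Python example below is purely illustrative, not UiPath’s implementation: call_llm is a canned stand-in for a real language model, and the two toy tools stand in for enterprise systems.

```python
# Illustrative observe-reason-act agent loop. Not UiPath's implementation:
# call_llm is a canned stand-in for a real LLM, and TOOLS are toy actions.

TOOLS = {
    "lookup_invoice": lambda ref: f"Invoice {ref}: status=unpaid",
    "send_reminder": lambda ref: f"Reminder sent for {ref}",
}

def call_llm(prompt: str) -> str:
    # A real agent would call a language model here; this canned policy
    # just lets the sketch run end to end.
    if "Reminder sent" in prompt:
        return "FINISH: reminder sent for the unpaid invoice"
    if "status=unpaid" in prompt:
        return "send_reminder:INV-42"
    return "lookup_invoice:INV-42"

def run_agent(goal: str, max_steps: int = 5) -> str:
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        # Reason: the model chooses the next action from the history so far.
        decision = call_llm("\n".join(history))
        if decision.startswith("FINISH:"):
            return decision.removeprefix("FINISH:").strip()
        tool, _, arg = decision.partition(":")
        # Act, then observe: run the tool and feed the result back in.
        result = TOOLS.get(tool, lambda a: f"unknown tool: {tool}")(arg)
        history.append(f"Action: {decision}\nObservation: {result}")
    return "Stopped: step limit reached"

print(run_agent("Chase the unpaid invoice INV-42"))
```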

Financial services is among the earliest adopters: more than 25% of firms plan to deploy agentic solutions this year, according to Bank Automation News.

Companies like BlackLine are using it to automate high-judgement accounting tasks, while public sector agencies like the US Navy are trialling the technology for logistics and admin workloads. The recruitment industry is also exploring AI agents for candidate screening and initial assessments.

Experts caution that success depends on identifying the right use cases and implementing proper governance. Still, the potential is clear: agentic automation could unlock entirely new capabilities and redefine how complex work gets done.

Morrisons tests Tally robots amid job cut fears

Supermarket giant Morrisons has introduced shelf-scanning robots in several of its UK stores as part of a push to streamline operations and improve inventory accuracy.

The robots, known as Tally, are currently being trialled in three branches—Wetherby, Redcar, and Stockton—where they autonomously roam aisles to monitor product placement, stock levels, and pricing.

Developed by US-based Simbe Robotics, Tally is billed as the world’s first autonomous item-scanning robot, capable of scanning up to 30,000 items per hour with 99% accuracy.

Already in use by major international retailers including Carrefour and Kroger, the robot is designed to operate in a range of retail environments, from chilled aisles to traditional shelves.

Morrisons says the robots will enhance store efficiency and reduce out-of-stock issues, but the move has sparked concern after reports that as many as 365 employees could lose their jobs due to automation.

The robots are part of a broader trend in retail toward AI-powered tools that boost productivity—but often at the expense of human labour.

Tally units are slim, mobile, and equipped with friendly digital faces. They return automatically to their charging stations when power runs low, and operate with minimal staff intervention.

While Morrisons has not confirmed a wider rollout in the UK, the trial reflects a growing shift in retail automation. As AI technologies evolve, companies are weighing the balance between operational gains and workforce impact.

Some Google apps are better off without AI

With Google I/O 2025 around the corner, concerns are growing about artificial intelligence creeping into every corner of Google’s ecosystem. While AI has enhanced tools like Gmail and Photos, some users are urging Google to leave certain apps untouched.

These include fan favourites like Emoji Kitchen, Google Keep, and Google Wallet, which continue to shine due to their simplicity and human-focused design. Critics argue that introducing generative AI to these apps could diminish what makes them special.

Emoji Kitchen’s handcrafted stickers, for example, are widely praised compared to Apple’s AI-driven alternatives. Likewise, Google Keep and Wallet are valued for their light, efficient interfaces that serve clear purposes without AI interference.

Even in environments where AI might seem useful, such as Android Auto and Google Flights, the call is for restraint. Users appreciate clear menus and limited distractions over chatbots making unsolicited suggestions.

As AI continues to dominate tech conversations, a growing number of voices are asking Google to preserve the balance between innovation and usability.

New AI tool predicts post-surgery infection risk

Leiden University Medical Center (LUMC) has developed a pioneering AI model, PERISCOPE, designed to predict infection risk in patients following surgery. PERISCOPE will become a standard tool at LUMC, with full implementation expected by mid-2026.

Based on data from over 250,000 surgical procedures, the tool predicts a patient’s personalised risk of developing an infection in the seven to thirty days after an operation, helping healthcare providers intervene earlier and reduce complications.

The AI model, developed by PhD researcher Siri van der Meijden, uses pseudonymised patient data including medical history, vital signs and existing conditions to identify those most at risk.
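
The article does not disclose PERISCOPE’s internals, but post-surgical risk models of this kind are typically classifiers over tabular clinical features. Below is a minimal sketch on entirely synthetic data, assuming a plain logistic-regression baseline rather than LUMC’s actual model or feature set.

```python
# Illustrative only: a generic infection-risk classifier on synthetic
# tabular data. PERISCOPE's real features, model and data are LUMC's.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
# Hypothetical features: age, surgery duration, diabetes flag, temperature.
X = np.column_stack([
    rng.normal(60, 15, n),       # age (years)
    rng.normal(120, 40, n),      # surgery duration (minutes)
    rng.integers(0, 2, n),       # diabetes (0/1)
    rng.normal(37.0, 0.5, n),    # first post-op temperature (deg C)
])
# Synthetic labels: infection made more likely by long surgery and diabetes.
logits = 0.01 * (X[:, 1] - 120) + 1.2 * X[:, 2] - 2.5
y = rng.random(n) < 1 / (1 + np.exp(-logits))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# The clinically useful output is a probability per patient, which a
# dashboard can map to low/medium/high risk bands.
risk = model.predict_proba(X_te)[:, 1]
print("AUC:", round(roc_auc_score(y_te, risk), 3))
```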

During testing, PERISCOPE performed as well as experienced doctors and outperformed less experienced ones, making it a valuable decision-support tool. Once fully adopted, the tool is expected to save time, improve patient outcomes, and potentially predict other complications.

Rather than replace clinicians, it complements their judgement by offering a clear, visual dashboard of infection risk levels. Integration into hospital systems remains a challenge, but preparations are underway.

Van der Meijden continues to develop the model to expand its predictive capabilities and ensure long-term impact not only in the Netherlands, but globally.

Lendlord introduces AI tools for property investors

Lendlord has launched LendlordAI, a suite of AI tools designed to support landlords and property investors with faster, smarter decision-making.

Available now to all users of the platform, the AI assistant offers instant insights into property listings, real-time deal analysis, and automated portfolio reviews.

The system helps estimate refurbishment costs and projected end value for buy-refurbish-refinance (BRR) and flip projects, while also generating summaries and even drafting emails to agents or tenants.
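
The deal arithmetic behind such estimates is easy to illustrate. Below is a minimal sketch of generic BRR maths with made-up figures and an assumed 75% refinance loan-to-value; it is not Lendlord’s methodology.

```python
# Generic buy-refurbish-refinance arithmetic, illustrative figures only.
# Not Lendlord's actual model; the 75% loan-to-value is an assumption.
def brr_summary(purchase: float, refurb: float, end_value: float,
                refinance_ltv: float = 0.75) -> dict:
    total_in = purchase + refurb
    new_loan = end_value * refinance_ltv   # capital released on refinance
    return {
        "total_invested": total_in,
        "refinance_loan": new_loan,
        "money_left_in": max(total_in - new_loan, 0.0),
        "equity_created": end_value - total_in,
    }

# Buy at 150k, spend 30k on refurbishment, revalue at 220k, refinance at
# 75% LTV: leaves 15k in the deal and creates 40k of equity.
print(brr_summary(150_000, 30_000, 220_000))
```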

These features aim to cut through information overload and support efficient portfolio management.

Co-founder and CEO Aviram Shahar described LendlordAI as a tailored smart assistant for professionals, reducing manual work and offering clarity in a complex investment market.

The platform also includes account-specific responses and educational resources to help users improve their knowledge.

New AI tool boosts delivery of children’s support plans

Stoke-on-Trent City Council in the UK has introduced AI to speed up the production of special educational needs reports amid growing demand. The new system is already showing results, with 83% of plans issued within the 20-week target in April, up from just 43% the previous year.

Traditionally compiled by individual case workers, Education, Health and Care Plans (EHCPs) are now being partially automated using AI trained to extract information from psychological and medical documents.
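
No technical detail is given, but extraction of this kind is commonly done by prompting a language model to return a fixed schema, with a human check afterwards. A hypothetical sketch follows, in which the complete callable and the field names are invented for illustration rather than taken from the council’s system.

```python
# Hypothetical sketch of LLM-based extraction into a fixed schema, with a
# human-review flag. The field names and `complete` callable are invented
# for illustration; this is not Stoke-on-Trent's actual system.
import json

FIELDS = ["child_needs", "recommended_provision", "professional_sources"]

def extract_ehcp_fields(document_text: str, complete) -> dict:
    prompt = (
        f"Extract these fields from the report below as JSON with keys {FIELDS}. "
        "Quote the source text; do not invent content.\n\n" + document_text
    )
    data = json.loads(complete(prompt))
    # Human review stays mandatory: flag gaps for the case worker.
    missing = [f for f in FIELDS if not data.get(f)]
    return {"fields": data, "needs_review": bool(missing), "missing": missing}

# Demo with a canned response in place of a real model call:
canned = lambda _prompt: json.dumps({
    "child_needs": "speech and language support",
    "recommended_provision": "weekly speech therapy sessions",
    "professional_sources": "educational psychologist report",
})
print(extract_ehcp_fields("...report text...", canned))
```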

Despite the use of AI, a human case worker still reviews each report to check for accuracy and ensure the needs of the child are properly represented.

The aim is to improve both efficiency and the quality of reports by allowing staff to focus on substance rather than repetitive formatting tasks.

Councillors welcomed the move, highlighting the potential of technology to reduce backlogs and improve outcomes for families.

Alongside the AI rollout, the council has hired more educational psychologists, reformed the application process, and increased early intervention efforts to manage rising demand.

OpenAI backs away from for-profit transition amid scrutiny

OpenAI has announced it will no longer pursue a full transition to a for-profit company. Instead, it will restructure its commercial arm as a public benefit corporation (PBC), retaining oversight by its nonprofit board.

The move comes after discussions with the attorneys general of California and Delaware, and growing concerns about governance and mission drift. The nonprofit board—best known for briefly removing CEO Sam Altman—will continue to oversee the company and appoint the PBC board.

Investors will now hold regular, uncapped equity in the PBC, replacing the previous 100x return cap, a change designed to attract future funding. The nonprofit will also gain a growing equity stake in the business arm.

In a message to staff, Altman said OpenAI remains committed to building AI that benefits humanity and sees this structure as the best path forward. Critics, including former staff, say questions remain about technology ownership and long-term priorities.

At the same time, Meta is positioning itself as a major rival. It recently launched a standalone AI assistant app, powered by its Llama 4 model and available across platforms including Ray-Ban smart glasses. The app includes a social Discover feed, encouraging interaction with shared AI outputs.

OpenAI’s new structure attempts to balance commercial growth with ethical governance—a model that may influence how other AI firms approach funding, control, and public accountability.

US senator calls for AI chip tracking to protect national security

A new bill introduced by Republican Senator Tom Cotton aims to bolster national security by requiring location verification features on American-made AI chips.

The Chip Security Act, announced on 9 May, would ensure such technology does not end up in the hands of foreign adversaries, particularly China.

Cotton urged the US Departments of Commerce and Defense to assess how tracking mechanisms could help detect and prevent illegal chip exports.

He also called for stricter obligations for companies exporting AI chips, including notifying authorities if devices are tampered with or redirected from their original destinations.

The proposed legislation follows a policy shift announced on 7 May by the Trump administration to ease restrictions on AI chip exports previously imposed under President Biden.

Cotton argued that better security practices could allow US firms to expand globally without undermining the country’s technological edge.
