Google tests AI tool to automate software development

Google is internally testing an advanced AI tool designed to support software engineers through the entire development cycle, according to The Information. The company is also expected to demonstrate integration of its Gemini chatbot's voice mode with Android-powered XR headsets.

The agentic AI assistant is said to handle tasks such as code generation and documentation, and has already been previewed to staff and developers ahead of Google’s I/O conference on 20 May. The move reflects a wider trend among tech giants racing to automate programming.

Amazon is developing its own coding assistant, Kiro, which can process both text and visual inputs, detect bugs, and auto-document code. While AWS initially targeted a June launch, the current release date remains uncertain.

Microsoft and Google have claimed that around 30% of their code is now AI-generated. OpenAI is also eyeing expansion, reportedly in talks to acquire AI coding start-up Windsurf for $3 billion.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US Copyright Office avoids clear decision on AI and fair use

The US Copyright Office has stopped short of deciding whether AI companies can legally use copyrighted material to train their systems under fair use.

Its newly released report acknowledges that some uses—such as non-commercial research—may qualify, while others, like replicating expressive works from pirated content to produce market-ready AI output, likely won’t.

Rather than offering a definitive answer, the Office said such cases must be assessed by the courts, not through a universal standard.

The latest report is the third in a series aimed at guiding how copyright law applies to AI-generated content. It reiterates that works entirely created by AI cannot be copyrighted, but human-edited outputs might still qualify.

The 108-page document focuses heavily on whether AI training methods transform content enough to qualify as fair use, and whether they harm creators' livelihoods through lost sales or diluted markets.

Instead of setting new policy, the Office highlights existing legal principles, especially the four factors of fair use: the purpose and character of the use, the nature of the copyrighted work, the amount used, and the effect on the market for the original.

It notes that AI-generated content can sometimes alter original works meaningfully, but when styles or outputs closely resemble protected material, legal risks remain. Tools like content filters are seen as helpful in preventing infringement, even though they’re not always reliable.

The timing of the report has been overshadowed by political turmoil. President Donald Trump reportedly dismissed both the Librarian of Congress and the head of the Copyright Office days before the report’s release.

Meanwhile, creators continue urging the government not to permit fair use in AI training, arguing it threatens the value of original work. The debate is now expected to unfold further in courtrooms instead of regulatory offices.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Jamie Lee Curtis calls out Zuckerberg over AI scam using her likeness

Jamie Lee Curtis has directly appealed to Mark Zuckerberg after discovering her likeness had been used without consent in an AI-generated advert.

Posting on Facebook, Curtis expressed her frustration with Meta’s lack of proper channels to report such abuse, stating she had exhausted all official avenues before resorting to a public plea.

The fake video reportedly manipulated footage from an emotional interview following the January wildfires in Los Angeles, inserting false statements under the guise of a product endorsement.

Instead of remaining silent, Curtis urged Zuckerberg to take action, saying the unauthorised content damaged her integrity and voice. Within hours of her public callout, Meta confirmed the video had been removed for breaching its policies, a rare example of a swift response.

‘It worked! Yay Internet! Shame has its value!’ she wrote in a follow-up, though she also highlighted the broader risks posed by deepfakes.

The actress joins a growing list of celebrities, including Taylor Swift and Scarlett Johansson, who’ve been targeted by AI misuse.

Swift was forced to publicly clarify her political stance after an AI-generated video falsely depicted her endorsing Donald Trump, while Johansson criticised OpenAI for allegedly using a voice nearly identical to hers after she had declined to take part in the project.

The issue has reignited concerns around consent, misinformation and the exploitation of public figures.

Lawmakers in California have already begun pushing back. New legislation signed by Governor Gavin Newsom aims to protect performers from unauthorised digital replicas and deepfakes.

Meanwhile, in Washington, proposals like the No Fakes Act seek to hold tech platforms accountable, possibly fining them thousands per violation. As Curtis and others warn, without stronger protections, the misuse of AI could spiral further, threatening not just celebrities but the public as a whole.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Morrisons tests Tally robots amid job cut fears

Supermarket giant Morrisons has introduced shelf-scanning robots in several of its UK stores as part of a push to streamline operations and improve inventory accuracy.

The robots, known as Tally, are currently being trialled in three branches—Wetherby, Redcar, and Stockton—where they autonomously roam aisles to monitor product placement, stock levels, and pricing.

Developed by US-based Simbe Robotics, Tally is billed as the world's first autonomous item-scanning robot, capable of scanning up to 30,000 items per hour with 99% accuracy.

Already in use by major international retailers including Carrefour and Kroger, the robot is designed to operate in a range of retail environments, from chilled aisles to traditional shelves.

Morrisons says the robots will enhance store efficiency and reduce out-of-stock issues, but the move has sparked concern after reports that as many as 365 employees could lose their jobs due to automation.

The robots are part of a broader trend in retail toward AI-powered tools that boost productivity—but often at the expense of human labour.

Tally units are slim, mobile, and equipped with friendly digital faces. They return automatically to their charging stations when power runs low, and operate with minimal staff intervention.

While Morrisons has not confirmed a wider rollout in the UK, the trial reflects a growing shift in retail automation. As AI technologies evolve, companies are weighing the balance between operational gains and workforce impact.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Lendlord introduces AI tools for property investors

Lendlord has launched LendlordAI, a suite of AI tools designed to support landlords and property investors with faster, smarter decision-making.

Available now to all users of the platform, the AI assistant offers instant insights into property listings, real-time deal analysis, and automated portfolio reviews.

The system helps estimate refurbishment costs and projected value for buy-refurbish-refinance (BRR) and flip projects, while also generating summaries and even drafting emails for communication with agents or tenants.

These features aim to cut through information overload and support efficient portfolio management.

Co-founder and CEO Aviram Shahar described LendlordAI as a tailored smart assistant for professionals, reducing manual work and offering clarity in a complex investment market.

The platform also includes account-specific responses and educational resources to help users improve their knowledge.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New AI tool boosts delivery of children’s support plans

In the UK, Stoke-on-Trent City Council has introduced AI to speed up the production of special educational needs reports amid growing demand. The new system is already showing results, with 83% of plans issued within the 20-week target in April, up from just 43% the previous year.

Traditionally compiled by individual case workers, Education, Health and Care Plans (EHCPs) are now being partially automated using AI trained to extract information from psychological and medical documents.

Despite the use of AI, a human case worker still reviews each report to check for accuracy and ensure the needs of the child are properly represented.

The aim is to improve both efficiency and the quality of reports by allowing staff to focus on substance rather than repetitive formatting tasks.

Councillors welcomed the move, highlighting the potential of technology to reduce backlogs and improve outcomes for families.

Alongside the AI rollout, the council has hired more educational psychologists, reformed the application process, and increased early intervention efforts to manage rising demand.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Alphabet stock dips as AI tools begin to dent Google search volumes

Alphabet shares fell sharply on Wednesday following courtroom testimony that Google searches on Apple’s Safari browser declined in April—reportedly for the first time ever.

Apple’s senior executive Eddy Cue said the drop came as users increasingly turned to AI tools like ChatGPT and Perplexity instead of traditional search engines.

The market reaction was swift, with Alphabet losing ground before partially recovering after Google clarified that overall search volumes remain on the rise.

Several analysts argued the sell-off may have been exaggerated, noting Apple’s incentive to downplay Google’s dominance as the companies face antitrust scrutiny. In 2022, Google reportedly paid Apple $20 billion to remain Safari’s default search provider.

Still, some analysts warn of a longer-term shift. Tech veteran Gene Munster called it the ‘beginning of the decline’, arguing that the way people find information is undergoing a fundamental change. Unlike search results pages, AI assistants provide direct answers—undermining Google’s ad-driven revenue model.

While Alphabet still owns a broad portfolio including YouTube, Android, Google Cloud and autonomous driving company Waymo, its core business is facing structural headwinds.

Investors are already adjusting expectations. Alphabet’s price-to-earnings ratio has dropped to 18, down from a 10-year average of 28, reflecting growing concerns around disruption.

Some see an opportunity; others, a reckoning. Whether this moment marks a short-term dip or a longer-term revaluation will depend on how Google adapts to the AI-driven shift in how people search for—and monetise—information.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK artists urge PM to shield creative work from AI exploitation

More than 400 prominent British artists, including Dua Lipa, Elton John, and Sir Ian McKellen, have signed a letter urging Prime Minister Keir Starmer to update UK copyright laws to protect their work from being used without consent in training AI systems. The signatories argue that current laws leave their creative output vulnerable to exploitation by tech companies, which could ultimately undermine the UK’s status as a global cultural leader.

The artists are backing a proposed amendment to the Data (Use and Access) Bill by Baroness Beeban Kidron, requiring AI developers to disclose when and how they use copyrighted materials. They believe this transparency could pave the way for licensing agreements that respect the rights of creators while allowing responsible AI development.

Nobel laureate Kazuo Ishiguro and music legends like Paul McCartney and Kate Bush have joined the call, warning that creators risk ‘giving away’ their life’s work to powerful tech firms. While the government insists it is consulting all parties to ensure a balanced outcome that supports both the creative sector and AI innovation, not everyone supports the amendment.

Critics, like Julia Willemyns of the Centre for British Progress, argue that stricter copyright rules could stifle technological growth, push development offshore, and damage the UK economy.

Why does it matter?

The debate reflects growing global tension between protecting intellectual property and enabling AI progress. With a key vote approaching in the House of Lords, artists are pressing for urgent action to secure a fair and sustainable path forward that upholds innovation and artistic integrity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cybercriminals trick users with fake AI apps

Cybercriminals are tricking users into downloading a dangerous new malware called Noodlophile by disguising it as AI software. Rather than using typical phishing tactics, attackers create convincing fake platforms that appear to offer AI-powered tools for editing videos or images.

These are promoted through realistic-looking Facebook groups and viral social media posts, some of which have received over 62,000 views.

Users are lured with promises of AI-generated content and are directed to bogus sites, one of which pretends to be CapCut AI, offering video editing features. Once users upload prompts and attempt to download the content, they unknowingly receive a malicious ZIP file.

Inside is a disguised program that kicks off a chain of infections, eventually installing the Noodlophile malware, which can steal browser credentials, crypto wallet details, and other sensitive data.

The malware is linked to a Vietnamese developer who identifies themselves as a ‘passionate Malware Developer’ on GitHub. Vietnam has a known history of cybercrime activity targeting social media platforms like Facebook.

In some cases, the Noodlophile Stealer has been bundled with remote access tools like XWorm, which allow attackers to maintain long-term control over victims’ systems.

This isn’t the first time attackers have exploited public interest in AI for malicious purposes. In 2023, Meta removed more than 1,000 malicious links that traded on ChatGPT’s popularity to spread malware.

Meanwhile, cybersecurity experts at CYFIRMA have reported another threat: a new, simple yet effective malware called PupkinStealer, which secretly sends stolen information to hackers using Telegram bots.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Scale AI expands into Saudi Arabia and UAE

Scale AI, a San Francisco-based startup backed by Amazon, plans to open a new office in Riyadh by the end of the year as part of its broader Middle East expansion.

The company also intends to establish a presence in the United Arab Emirates, although it has yet to confirm the timeline for that move.

Trevor Thompson, the company’s global managing director, said the Gulf is among the fastest-growing regions for AI adoption outside of the US and China.

Gulf states like Saudi Arabia have been investing heavily in tech startups, data centres and computing infrastructure, urging companies to set up local operations and create regional jobs. Salesforce, for instance, has already begun hiring for a $500 million investment in the kingdom.

Founded in 2016, Scale AI provides data-labelling services essential for training AI products, relying on a vast network of contract workers. Its clients include OpenAI and Microsoft.

The company hit a $13.8 billion valuation last year after a $1 billion funding round backed by Amazon, Meta and others.

In 2024, it generated about $870 million in revenue and is reportedly in talks for a deal that could nearly double its value.

Scale AI is also strengthening its regional ties. In February, it signed a five-year agreement with Qatar to enhance public services, followed by a partnership with Abu Dhabi-based Inception in March.

The news coincides with President Donald Trump’s upcoming visit to Saudi Arabia, where his team is considering lifting export controls on advanced AI chips, potentially boosting the Gulf’s access to cutting-edge technology.

Notably, Scale AI’s former managing director, Michael Kratsios, now advises Trump on tech matters.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!