ACAI and Universal AI University partner to boost AI innovation in Qatar

The Arab Centre for Artificial Intelligence (ACAI) and India’s Universal AI University (UAI) have signed a Memorandum of Understanding (MoU) to accelerate AI development across Qatar and the broader region. The collaboration aims to enhance education, research, and innovation in AI and emerging technologies.

Together, ACAI and UAI plan to establish a specialised AI research centre and develop advanced training programs to cultivate local expertise. They will also launch various online and short-term educational courses designed to address the growing demand for skilled AI professionals in Qatar’s job market, ensuring that the workforce is well-prepared for future technological developments.

Looking forward, the partnership envisions creating a dedicated AI-focused university campus. The initiative aligns with Qatar’s vision to transition into a knowledge-based economy by fostering innovation and offering academic programs in AI, engineering, business administration, environmental sustainability, and other emerging technologies.

The MoU is valid for ten years and includes provisions for dispute resolution, intellectual property rights management, and annual reviews to ensure tangible and sustainable outcomes. Further detailed implementation agreements are expected to formalise the partnership’s operational aspects.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK research body hit by 5.4 million cyber attacks

UK Research and Innovation (UKRI), the country’s national funding body for science and research, has reported a staggering 5.4 million cyber attacks this year — a sixfold increase compared to the previous year.

According to data obtained through freedom of information requests, the vast majority of the 5.4 million incidents were identified as spam or malicious emails; 236,400 were phishing attempts designed to trick employees into revealing sensitive data, and a further 11,200 were malware-based attacks.

The scale of these incidents highlights the growing threat faced by both public and private sector institutions. Experts believe the rise of AI has enabled cybercriminals to launch more frequent and sophisticated attacks.

Rick Boyce, technology chief at AND Digital, warned that the emergence of AI has introduced threats ‘at a pace we’ve never seen before’, calling for a move beyond traditional defences to stay ahead of evolving risks.

UKRI, which is sponsored by the Department for Science, Innovation and Technology, manages an annual budget of £8 billion, much of it invested in cutting-edge research.

A budget like this makes it an attractive target for cybercriminals and state-sponsored actors alike, particularly those looking to steal intellectual property or sabotage infrastructure. Security experts suggest the scale and nature of the attacks point to involvement from hostile nation states, with Russia a likely culprit.

Though UKRI cautioned that differing reporting periods may affect the accuracy of year-on-year comparisons, there is little doubt about the severity of the threat.

The UK’s National Cyber Security Centre (NCSC) has previously warned of Russia’s Unit 29155 targeting British government bodies and infrastructure for espionage and disruption.

With other notorious groups such as Fancy Bear and Sandworm also active, the cybersecurity landscape is becoming increasingly fraught.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Jersey artists push back against AI art

A Jersey illustrator has spoken out against the growing use of AI-generated images, calling the trend ‘heartbreaking’ for artists who fear losing their livelihoods to technology.

Abi Overland, known for her intricate hand-drawn illustrations, said it was deeply concerning to see AI-created visuals shared online with no acknowledgement of their impact on human creators.

She warned that AI systems often rely on artists’ existing work for training, raising serious questions about copyright and fairness.

Overland stressed that illustrations like hers are not simply the product of new tools but of years of human experience and emotion, something AI cannot replicate. She believes the increasing normalisation of AI content is dangerous and could discourage aspiring artists from entering the field.

Fellow Jersey illustrator Jamie Willow echoed the concern, saying many local companies are already replacing human work with AI outputs, undermining the value of art created with genuine emotional connection and moral integrity.

However, not everyone sees AI as a threat. Sebastian Lawson of Digital Jersey argued that artists could instead use AI to enhance their creativity rather than replace it. He insisted that human creators would always have an edge thanks to their unique insight and ability to convey meaning through their work.

The debate comes as the House of Lords recently blocked the UK government’s data bill for a second time, demanding stronger protections for artists and musicians against AI misuse.

Meanwhile, government officials have said they will not consider any copyright changes unless they are sure such moves would benefit creators as well as tech companies.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI Darth Vader in Fortnite sparks union dispute

The use of an AI-generated Darth Vader voice in Fortnite has triggered a legal dispute between SAG-AFTRA and Epic Games.

According to GamesIndustry.biz, the actors’ union filed an unfair labor practice complaint, claiming it was not informed or consulted about the decision to use an artificial voice model in the game.

In Fortnite’s Galactic Battle season, players who defeat Darth Vader in Battle Royale can recruit him, triggering limited voice interactions powered by conversational AI.

The voice stems from a licensing agreement with the estate of James Earl Jones, who retired from the role in 2022 and granted rights for AI use of his iconic performance before his death in 2024.

While Epic Games has confirmed it had legal permission to use Jones’ voice, SAG-AFTRA alleges the company bypassed union protocols by not informing them or offering the role to a human actor.

The outcome of this dispute could have broader implications for how AI voices are integrated into video games and media going forward, particularly regarding labor rights and union oversight.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Lords reject UK AI copyright bill again

The UK government has suffered a second defeat in the House of Lords over its Data (Use and Access) Bill, as peers once again backed a copyright-focused amendment aimed at protecting artists from AI content scraping.

Baroness Kidron, a filmmaker and digital rights advocate, led the charge, accusing ministers of listening to the ‘sweet whisperings of Silicon Valley’ and allowing tech firms to ‘redefine theft’ by exploiting copyrighted material without permission.

Her amendment would force AI companies to disclose their training data sources and obtain consent from rights holders.

The government had previously rejected this amendment, arguing it would lead to ‘piecemeal’ legislation and pre-empt ongoing consultations.

But Kidron’s position was strongly supported across party lines, with peers calling the current AI practices ‘burglary’ and warning of catastrophic damage to the UK’s creative sector.

High-profile artists like Sir Elton John, Paul McCartney, Annie Lennox, and Kate Bush have condemned the government’s stance, with Sir Elton branding ministers ‘losers’ and accusing them of enabling theft.

Peers from Labour, the Lib Dems, the Conservatives, and the crossbenches united to defend UK copyright law, calling the government’s actions a betrayal of the country’s leadership in intellectual property rights.

Labour’s Lord Brennan warned against a ‘double standard’ for AI firms, while Lord Berkeley insisted immediate action was needed to prevent long-term harm.

Technology Minister Baroness Jones countered that no country has resolved the AI-copyright dilemma and warned that the amendment would only create more regulatory confusion.

Nonetheless, peers voted overwhelmingly in favour of Kidron’s proposal—287 to 118—sending the bill back to the Commons with a strengthened demand for transparency and copyright safeguards.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI hallucination at centre of Anthropic copyright lawsuit

Anthropic, the AI company behind the Claude chatbot, has been ordered by a federal judge to respond to allegations that it submitted fabricated material, possibly generated by AI, as part of its defence in an ongoing copyright lawsuit.

The lawsuit, filed in October 2023 by music publishers Universal Music Group, Concord, and ABKCO, accuses Anthropic of unlawfully using lyrics from over 500 songs to train its chatbot. The publishers argue that Claude can produce copyrighted material when prompted, such as lyrics from Don McLean’s American Pie.

During a court hearing on Tuesday in California, the publishers’ attorney claimed that an Anthropic data scientist cited a nonexistent academic article from The American Statistician journal to support the argument that Claude rarely outputs copyrighted lyrics.

One of the article’s alleged authors later confirmed the paper was a ‘complete fabrication.’ The judge is now requiring Anthropic to formally address the incident in court.

The company, founded in 2021, is backed by major investors including Amazon, Google, and Sam Bankman-Fried, the disgraced crypto executive convicted of fraud in 2023.

The case marks a significant test of how AI companies handle copyrighted content, and how courts respond when AI-generated material is used in legal proceedings.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US Copyright Office avoids clear decision on AI and fair use

The US Copyright Office has stopped short of deciding whether AI companies can legally use copyrighted material to train their systems under fair use.

Its newly released report acknowledges that some uses—such as non-commercial research—may qualify, while others, like replicating expressive works from pirated content to produce market-ready AI output, likely won’t.

Rather than offering a definitive answer, the Office said such cases must be assessed by the courts, not through a universal standard.

The latest report is the third in a series aimed at guiding how copyright law applies to AI-generated content. It reiterates that works entirely created by AI cannot be copyrighted, but human-edited outputs might still qualify.

The 108-page document focuses heavily on whether AI training methods transform content enough to justify legal protection, and whether they harm creators’ livelihoods through lost sales or diluted markets.

Instead of setting new policy, the Office highlights existing legal principles, especially the four factors of fair use: the purpose, the nature of the work, the amount used, and the impact on the original market.

It notes that AI-generated content can sometimes alter original works meaningfully, but when styles or outputs closely resemble protected material, legal risks remain. Tools like content filters are seen as helpful in preventing infringement, even though they’re not always reliable.

The report’s release was overshadowed by political turmoil, with President Donald Trump reportedly dismissing both the Librarian of Congress and the head of the Copyright Office just days earlier.

Meanwhile, creators continue urging the government not to permit fair use in AI training, arguing it threatens the value of original work. The debate is now expected to unfold further in courtrooms instead of regulatory offices.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Alphabet stock dips as AI tools begin to dent Google search volumes

Alphabet shares fell sharply on Wednesday following courtroom testimony that Google searches on Apple’s Safari browser declined in April—reportedly for the first time ever.

Apple’s senior executive Eddy Cue said the drop came as users increasingly turned to AI tools like ChatGPT and Perplexity instead of traditional search engines.

The market reaction was swift, with Alphabet losing ground before partially recovering after Google clarified that overall search volumes remain on the rise.

Several analysts argued the sell-off may have been exaggerated, noting Apple’s incentive to downplay Google’s dominance as the companies face antitrust scrutiny. In 2022, Google reportedly paid Apple $20 billion to remain Safari’s default search provider.

Still, some analysts warn of a longer-term shift. Tech veteran Gene Munster called it the ‘beginning of the decline’, arguing that the way people find information is undergoing a fundamental change. Unlike search results pages, AI assistants provide direct answers—undermining Google’s ad-driven revenue model.

While Alphabet still owns a broad portfolio including YouTube, Android, Google Cloud, and the autonomous driving company Waymo, its core business is facing structural headwinds.

Investors are already adjusting expectations. Alphabet’s price-to-earnings ratio has dropped to 18, down from a 10-year average of 28, reflecting growing concerns around disruption.

Some see an opportunity; others, a reckoning. Whether this moment marks a short-term dip or a longer-term revaluation will depend on how Google adapts to the AI-driven shift in how people search for—and monetise—information.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK artists urge PM to shield creative work from AI exploitation

More than 400 prominent British artists, including Dua Lipa, Elton John, and Sir Ian McKellen, have signed a letter urging Prime Minister Keir Starmer to update UK copyright laws to protect their work from being used without consent in training AI systems. The signatories argue that current laws leave their creative output vulnerable to exploitation by tech companies, which could ultimately undermine the UK’s status as a global cultural leader.

The artists are backing a proposed amendment to the Data (Use and Access) Bill by Baroness Beeban Kidron, requiring AI developers to disclose when and how they use copyrighted materials. They believe this transparency could pave the way for licensing agreements that respect the rights of creators while allowing responsible AI development.

Nobel laureate Kazuo Ishiguro and music legends like Paul McCartney and Kate Bush have joined the call, warning that creators risk ‘giving away’ their life’s work to powerful tech firms. While the government insists it is consulting all parties to ensure a balanced outcome that supports both the creative sector and AI innovation, not everyone supports the amendment.

Critics, like Julia Willemyns of the Centre for British Progress, argue that stricter copyright rules could stifle technological growth, drive development offshore, and damage the UK economy.

Why does it matter?

The debate reflects growing global tension between protecting intellectual property and enabling AI progress. With a key vote approaching in the House of Lords, artists are pressing for urgent action to secure a fair and sustainable path forward that upholds innovation and artistic integrity.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI partners with major news outlets

OpenAI has signed multiple content-sharing deals with major media outlets, including Politico, Vox, Wired, and Vanity Fair, allowing their content to be featured in ChatGPT.

Under its deal with The Washington Post, for instance, ChatGPT will display summaries, quotes, and links to the publication’s original reporting in response to relevant queries. OpenAI has secured similar partnerships with over 20 news publishers and 160 outlets in 20 languages.

The Washington Post’s head of global partnerships, Peter Elkins-Williams, emphasised the importance of meeting audiences where they are, ensuring ChatGPT users have access to impactful reporting.

OpenAI’s media partnerships head, Varun Shetty, noted that more than 500 million people use ChatGPT weekly, highlighting the significance of these collaborations in providing timely, trustworthy information to users.

OpenAI has worked to avoid criticism related to copyright infringement, having previously faced legal challenges, particularly from the New York Times, over claims that chatbots were trained on millions of articles without permission.

While OpenAI sought to dismiss these claims, a US district court allowed the case to proceed, intensifying scrutiny over AI’s use of news content.

Despite these challenges, OpenAI continues to form agreements with leading publications, such as Hearst, Condé Nast, Time magazine, and Vox Media, helping ensure their journalism reaches a wider audience.

Meanwhile, other publications have pursued legal action against AI companies like Cohere for allegedly using their content without consent to train AI models.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!