Anthropic settles $1.5 billion copyright case with authors

The AI startup Anthropic has agreed to pay $1.5 billion to settle a copyright lawsuit accusing it of using pirated books to train its Claude AI chatbot.

The proposed deal, one of the largest of its kind, comes after a group of authors claimed the startup deliberately downloaded unlicensed copies of around 500,000 works.

According to reports, Anthropic will pay about $3,000 per book, plus interest, and has agreed to destroy the datasets containing the material. A California judge will review the settlement terms on 8 September before finalising them.

Lawyers for the plaintiffs described the outcome as a landmark, warning that using pirated works for AI training is unlawful.

The case reflects mounting legal pressure on the AI industry, with companies such as OpenAI and Microsoft also facing copyright disputes. The settlement followed a June ruling in which a judge found that training Claude on the books was ‘transformative’ and qualified as fair use, while allowing separate claims over the pirated copies to proceed.

Anthropic said the deal resolves legacy claims while affirming its commitment to safe AI development.

Despite the legal challenges, Anthropic continues to grow rapidly. Earlier in August, the company secured $13 billion in funding for a valuation of $183 billion, underlining its rise as one of the fastest-growing players in the global technology sector.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google avoids breakup as court ruling fuels AI Mode expansion

A US district judge has declined to order a breakup of Google, softening the blow of a 2024 ruling that found the company had illegally monopolised online search.

The decision means Google can press ahead with its shift from a search engine into an answer engine, powered by generative AI.

Google’s AI Mode replaces traditional blue links with direct responses to queries, echoing the style of ChatGPT. While the feature is optional for now, it could become the default.

That alarms publishers, who depend on search traffic for advertising revenue. Studies suggest chatbots reduce referral clicks by more than 90 percent, leaving many sites at risk of collapse.

Google is also experimenting with inserting ads into AI Mode, though it remains unclear how much revenue will flow to content creators. Websites can block their data from being scraped, but doing so would also remove them from Google search entirely.

Despite these concerns, Google argues that competition from ChatGPT, Perplexity, and other AI tools shows that new rivals are reshaping the search landscape.

The judge even cited the emergence of generative AI as a factor that altered the case against Google, underlining how the rise of AI has become central to the future of the internet.


OpenAI boss Sam Altman fuels debate over dead internet theory

Sam Altman, chief executive of OpenAI, has suggested that the so-called ‘dead internet theory’ may hold some truth. The idea, long dismissed as a conspiracy theory, claims much of the online world is now dominated by computer-generated content rather than real people.

Altman noted on X that he had not previously taken the theory seriously but believed there were now many accounts run by large language models.

His remark drew criticism from users who argued that OpenAI itself had helped create the problem by releasing ChatGPT in 2022, which triggered a surge of automated content.

The spread of AI systems has intensified debate over whether online spaces are increasingly filled with artificially generated voices.

Some observers also linked Altman’s comments to his work on World Network, formerly Worldcoin, a project founded in 2019 to verify human identity online through biometric scans. That initiative has been promoted as a potential safeguard against the growing influence of AI-driven systems.


Google Cloud study shows AI agents driving global business growth

A new Google Cloud study indicates that more than half of global enterprises are already using AI agents, with many reporting consistent revenue growth and faster return on investment.

The research, based on a survey of 3,466 executives across 24 countries, suggests agentic AI is moving from trial projects to large-scale deployment.

The findings reveal that 52% of executives said their organisations actively use AI agents, while 39% reported deploying more than ten. A group of early adopters, representing 13% of respondents, has gone further by dedicating at least half of its future AI budget to agentic AI.

These companies are embedding agents across operations and are more likely to report returns in customer service, marketing, cybersecurity and software development.

The report also highlights how industries are tailoring adoption. Financial services focus on fraud detection, retail uses agents for quality control, and telecom operators apply them for network automation.

Regional variations are notable: European companies prioritise tech support, Latin American firms lean on marketing, while Asia-Pacific enterprises emphasise customer service.

Although enthusiasm is strong, challenges remain. Executives cited data privacy, security and integration with existing systems as key concerns.

Google Cloud executives said that early adopters are not only automating tasks but also reshaping business processes, with 2025 expected to mark a shift towards embedding AI intelligence directly into operations.


Coinbase relies on AI for nearly half of its code

Coinbase CEO Brian Armstrong said AI now generates around 40 per cent of the exchange’s code, a share expected to surpass 50 per cent by October 2025. He emphasised that human oversight remains essential, as AI cannot be uniformly applied across all areas of the platform.

Armstrong confirmed that engineers were instructed to adopt AI development tools within a week, with those resisting the mandate dismissed. The move places Coinbase ahead of technology giants such as Microsoft and Google, which use AI for roughly 30 per cent of their code.

Security experts have raised concerns about the heavy reliance on AI. Industry figures warn that AI-generated code could contain bugs or miss critical context, posing risks for a platform holding over $420 billion in digital assets.

Larry Lyu called the strategy ‘a giant red flag’ for security-sensitive businesses.

Supporters argue that Coinbase’s approach is measured. Richard Wu of Tensor said AI could generate up to 90 per cent of high-quality code within five years, provided it is paired with the kind of thorough review and testing used to catch junior engineers’ errors.


Perplexity AI teams up with PayPal for fintech expansion

PayPal has partnered with Perplexity AI to provide PayPal and Venmo users in the US and select international markets with a free 12-month Perplexity Pro subscription and early access to the AI-powered Comet browser.

The subscription, normally $200 a year, allows unlimited queries, file uploads and advanced search features, while Comet offers natural language browsing to simplify complex tasks.

Industry analysts see the initiative as a way for PayPal to strengthen its position in fintech by integrating AI into everyday digital payments.

By linking their accounts, users gain access to AI tools, cashback incentives and subscription management features, signalling a push toward what some describe as agentic commerce, in which AI assistants guide financial and shopping decisions.

The deal also benefits Perplexity AI, a rising challenger in the search and browser markets. Exposure to millions of PayPal customers could accelerate adoption of its technology and provide valuable data for refining its models.

Analysts suggest the partnership reflects a broader trend of payment platforms evolving into service hubs that combine transactions with AI-driven experiences.

While enthusiasm is high among early users, concerns remain about data privacy and regulatory scrutiny over AI integration in finance.

Market reaction has been positive, with PayPal shares edging upward following the announcement. Observers believe such alliances will shape the next phase of digital commerce, where payments, browsing, and AI capabilities converge.


UK factories closed as cyberattack disrupts Jaguar Land Rover

Jaguar Land Rover (JLR) has ordered factory staff to work from home until at least next Tuesday as it recovers from a major cyberattack. Production remains suspended at key UK sites, including Halewood, Solihull, and Wolverhampton.

The disruption, first reported earlier this week, has ‘severely impacted’ production and sales, according to JLR. Reports suggest that assembly line workers have been instructed not to return before 9 September, while the situation remains under review.

The hack has hit operations beyond manufacturing, with dealerships unable to order parts and some customer handovers delayed. The timing is particularly disruptive, coinciding with the September release of new registration plates, which traditionally boosts demand.

A group of young hackers on Telegram, calling themselves Scattered Lapsus$ Hunters, has claimed responsibility for the incident. Linked to earlier attacks on Marks & Spencer and Harrods, the group reportedly shared screenshots of JLR’s internal IT systems as proof.

The incident follows a wider spate of UK retail and automotive cyberattacks this year. JLR has stated that it is working quickly to restore systems and emphasised that there is ‘no evidence’ that customer data has been compromised.


Fintech CISO says AI is reshaping cybersecurity skills

Financial services firms are adapting rapidly to the rise of AI in cybersecurity, according to David Ramirez, CISO at Broadridge. He said AI is changing the balance between attackers and defenders while also reshaping the skills security teams require.

On the defensive side, AI is already streamlining governance, risk management and compliance tasks, while also speeding up incident detection and training. He highlighted its growing role in areas like access management and data loss prevention.

He also stressed the importance of aligning cyber strategy with business goals and improving board-level visibility. While AI tools are advancing quickly, he urged CISOs not to lose sight of risk assessments and fundamentals in building resilient systems.


EASA survey reveals cautious optimism over aviation AI ethics

The European Union Aviation Safety Agency (EASA) has published survey results probing the ethical outlook of aviation professionals on AI deployment, released during its AI Days event in Cologne.

The AI Days conference gathered nearly 200 on-site attendees from across the globe, with even more participating online.

The survey measured acceptance, trust and comfort across eight hypothetical AI use cases, yielding an average acceptance score of 4.4 out of 7. Despite growing interest, two-thirds of respondents rejected at least one scenario.

Their key concerns included limitations of AI performance, privacy and data protection, accountability, safety risks and the potential for workforce de-skilling. A clear majority called for stronger regulation and oversight by EASA and national authorities.

In a keynote address, Christine Berg from the European Commission highlighted that AI in aviation is already practical, optimising air traffic flow and predictive maintenance, while emphasising the need for explainable, reliable and certifiable systems under the EU AI Act.

Survey findings will feed into EASA’s AI Roadmap and prompt public consultations as the agency advances policy and regulatory frameworks.


WhatsApp fixes flaw exploited in Apple device hacks

WhatsApp has fixed a vulnerability that exposed Apple device users to highly targeted cyberattacks. The flaw was chained with an iOS and iPadOS bug, allowing hackers to access sensitive data.

According to researchers at Amnesty’s Security Lab, the malicious campaign lasted around 90 days and affected fewer than 200 people. WhatsApp notified victims directly and urged all users to update their apps immediately.

Apple has also acknowledged the issue and released security patches to close the cybersecurity loophole. Experts warn that other apps beyond WhatsApp may have been exploited in the same campaign.

The identity of those behind the spyware attacks remains unclear. Both companies have stressed that prompt updates are the best protection for users against similar threats.
