Musk threatens legal action against Apple over AI app rankings

Elon Musk has announced plans to sue Apple, accusing the company of unfairly favouring OpenAI’s ChatGPT over his xAI app Grok on the App Store.

Musk claims that Apple’s ranking practices make it impossible for any AI app except OpenAI’s to reach the top spot, calling this behaviour an ‘unequivocal antitrust violation’. ChatGPT holds the number one position on Apple’s App Store, while Grok ranks fifth.

Musk expressed frustration on social media, questioning why his X app, which he describes as ‘the number one news app in the world,’ has not received higher placement. He suggested that Apple’s ranking decisions might be politically motivated.

The dispute highlights growing tensions as AI companies compete for prominence on major platforms.

Neither Apple nor Musk’s xAI has yet responded to requests for comment.

The controversy unfolds amid increasing scrutiny of App Store policies and their impact on competition, especially within the fast-evolving AI sector.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Fintiv accuses Apple of allegedly stealing its trade secrets

Fintiv, a Texas-based US company providing mobile financial services, has filed a legal complaint against Apple, alleging that it stole trade secrets from its predecessor, CorFire, to develop Apple Pay.

The complaint claims that Apple employees held several meetings with CorFire to discuss the implementation of CorFire’s mobile wallet solutions and that CorFire had uploaded proprietary information to a shared site maintained by Apple.

According to the lawsuit, the two companies signed a non-disclosure agreement (NDA), giving Apple access to CorFire’s confidential information, which Apple allegedly misappropriated after abandoning plans for a partnership.

According to the IPWatchdog, Fintiv has been involved in ongoing litigation over its mobile wallet patents. Recently, it lost two key appeals: one against PayPal, which upheld the dismissal of its patent infringement claims, and another against Apple, in which the court confirmed specific patent claims were invalid.

However, in May, Fintiv secured a partial victory when the Federal Circuit reversed a lower court’s ruling that Apple did not infringe one of its patents (US Patent No. 8,843,125), allowing that part of the case to proceed. Apple had not commented publicly as of the article’s publication.


Riverlane deploys Deltaflow 2 QEC in first UK commercial quantum integration

Riverlane has deployed its Deltaflow 2 quantum error correction (QEC) technology in a UK commercial quantum setting for the first time. The system introduces streaming quantum memory, enabling real-time error correction fast enough to preserve data across thousands of operations.

Deltaflow 2 combines a custom QEC chip with FPGA hardware and Riverlane’s software stack, supporting superconducting, spin, trapped-ion, and neutral-atom qubit platforms. It has been integrated with high-performance classical systems and a digital twin for noise simulation and monitoring.

Control hardware from Qblox delivers high-fidelity readout and ultra-low-latency links to enable real-time QEC. The deployment will validate error correction routines and benchmark system performance, forming the basis for future integration with OQC’s superconducting qubits.

The project is part of the UK Government-funded DECIDE programme, which aims to strengthen national capability in quantum error correction. Riverlane and OQC plan to demonstrate live QEC during quantum operations, supporting the creation of logical qubits for scalable systems.

Riverlane is also partnering with Infleqtion, Rigetti Computing, and others through the UK’s National Quantum Computing Centre. The company says growing industry demand reflects QEC’s shift from research to deployment, positioning Deltaflow 2 as a commercially viable, universally compatible tool.


Autonomous AI coding tool Jules now publicly available from Google

Google has released its autonomous coding agent Jules for free public use, offering AI-powered code generation, debugging, and optimisation. Built on the Gemini 2.5 Pro model, the tool completed a successful beta phase before entering general availability with both free and paid plans.

Jules supports a range of users, from developers to non-technical staff, automating tasks like building features or integrating APIs. The free version allows 15 tasks per day, while the Pro tier significantly raises the limits, providing access to more powerful tools.

Beta testing began in May 2025 and saw Jules process hundreds of thousands of tasks. Its new interface now includes visual explanations and bug fixes, refining usability. Integrated with GitHub and Gemini CLI, Jules can suggest optimisations, write tests, and even provide audio summaries.

Google positions Jules as a step beyond traditional code assistants by enabling autonomy. However, researchers warn that oversight remains essential to avoid misuse, especially in sensitive systems where AI errors could be costly.

While its free tier may appeal to startups and hobbyists, concerns over code originality and job displacement persist. Nonetheless, Jules could reshape development workflows and lower barriers to coding for a much broader user base.


News Corp CEO warns AI could ‘vandalise’ creativity and IP rights

News Corp chief executive Robert Thomson has warned that AI could damage creativity by undermining intellectual property rights.

At the company’s full-year results briefing in New York, he described the AI era as a historic turning point. He called for stronger protections to preserve America’s ‘comparative advantage in creativity’.

Thomson said allowing AI systems to consume and profit from copyrighted works without permission was akin to ‘vandalising virtuosity’.

He cited Donald Trump’s The Art of the Deal, published by News Corp’s book division, questioning whether it should be used to train AI that might undermine book sales. Despite the criticism, the company has rolled out its AI newsroom tools, NewsGPT and Story Cutter.

News Corp reported a two percent revenue rise to US$8.5 billion (A$13.1 billion), with net income from continuing operations climbing 71 percent to US$648 million.

Growth in the Dow Jones and REA Group segments offset declines in news media subscriptions and advertising.

Digital subscribers fell across several mastheads, although The Times and The Sunday Times saw gains. Profitability in news media rose 15 percent, aided by editorial efficiencies and cost-cutting measures.


AI news summaries threaten the future of journalism

Generative AI tools like ChatGPT significantly impact traditional online news by reducing search traffic to media websites.

As these AI assistants summarise news content directly in search results, users are less likely to click through to the sources, threatening already struggling publishers who depend on ad revenue and subscriptions.

A Pew Research Center study found that when AI summaries appear in search results, users click suggested links half as often as in traditional search formats.

Matt Karolian of the Boston Globe Media warns that the next few years will be especially difficult for publishers, urging them to adapt or risk being ‘swept away.’

While some, like the Boston Globe, have gained a modest number of new subscribers through ChatGPT, these numbers pale compared to other traffic sources.

To adapt, publishers are turning to Generative Engine Optimisation (GEO), tailoring content so that AI tools are more likely to surface and cite it. Some have blocked crawlers to prevent data harvesting, while others have reopened access to retain visibility.

Legal battles are unfolding, including a major lawsuit from The New York Times against OpenAI and Microsoft. Meanwhile, licensing deals between tech giants and media organisations are beginning to take shape.

With nearly 15% of under-25s now relying on AI for news, concerns are mounting over the credibility of information. As AI reshapes how news is consumed, the survival of original journalism and public trust in it face grave uncertainty.


Cloudflare claims Perplexity circumvented website scraping blocks

Cloudflare has accused AI startup Perplexity of ignoring explicit website instructions not to scrape their content.

According to the internet infrastructure company, Perplexity has allegedly disguised its identity and used technical workarounds to bypass restrictions set out in robots.txt files, which tell bots which pages they may or may not access.
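The mechanism at issue can be sketched with Python’s standard-library robots.txt parser. The rules and bot name below are purely illustrative, not the actual directives of any site involved in the dispute:

```python
# Minimal sketch of how robots.txt rules gate crawlers, using Python's
# standard library. "ExampleAIBot" is a hypothetical crawler name.
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt that blocks one AI crawler entirely
# while allowing all other user agents.
rules = """\
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# A crawler that identifies itself honestly is denied access...
print(parser.can_fetch("ExampleAIBot", "https://example.com/articles/1"))  # False

# ...while the same request under a generic browser user agent is allowed,
# which is why disguising the user agent defeats robots.txt: the file is
# an honour-system convention, not a technical enforcement mechanism.
print(parser.can_fetch("Mozilla/5.0", "https://example.com/articles/1"))  # True
```

Because compliance is voluntary, operators like Cloudflare fall back on network-level signals (IP ranges, traffic patterns) to identify bots that misrepresent themselves.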

The behaviour was reportedly detected after multiple Cloudflare customers complained about unauthorised scraping attempts.

Instead of respecting these rules, Cloudflare claims Perplexity altered its bots’ user agent to appear as a Google Chrome browser on macOS and switched its network identifiers to avoid detection.

The company says these tactics were seen across tens of thousands of domains and millions of daily requests, and that it used machine learning and network analysis to identify the activity.

Perplexity has denied the allegations, calling Cloudflare’s report a ‘sales pitch’ and disputing that the bot named in the findings belongs to the company. Cloudflare has since removed Perplexity’s bots from its verified list and introduced new blocking measures.

The dispute arises as Cloudflare intensifies its efforts to grant website owners greater control over AI crawlers. Last month, it launched a marketplace enabling publishers to charge AI firms for scraping, alongside free tools to block unauthorised data collection.

Perplexity has previously faced criticism over content use, with outlets such as Wired accusing it of plagiarism in 2024.


Hackers infiltrate Southeast Asian telecom networks

A cyber group breached telecoms across Southeast Asia, deploying advanced tracking tools instead of stealing data. Palo Alto Networks’ Unit 42 assesses the activity as ‘associated with a nation-state nexus’.

A hacking group gained covert access to telecom networks across Southeast Asia, most likely to track users’ locations, according to cybersecurity analysts at Palo Alto Networks’ Unit 42.

The campaign lasted from February to November 2024.

Instead of stealing data or directly communicating with mobile devices, the hackers deployed custom tools such as CordScan, designed to intercept traffic from mobile network components such as the SGSN (Serving GPRS Support Node). These methods suggest the attackers focused on tracking rather than data theft.

Unit 42 assessed the activity ‘with high confidence’ as ‘associated with a nation state nexus’. The Unit notes that ‘this cluster heavily overlaps with activity attributed to Liminal Panda, a nation state adversary tracked by CrowdStrike’; according to CrowdStrike, Liminal Panda is considered to be a ‘likely China-nexus adversary’. It further states that ‘while this cluster significantly overlaps with Liminal Panda, we have also observed overlaps in attacker tooling with other reported groups and activity clusters, including Light Basin, UNC3886, UNC2891 and UNC1945.’

The attackers initially gained access by brute-forcing SSH credentials using login details specific to telecom equipment.

Once inside, they installed new malware, including a backdoor named NoDepDNS, which tunnels malicious data through port 53 — typically used for DNS traffic — in order to avoid detection.

To maintain stealth, the group disguised malware, altered file timestamps, disabled system security features and wiped authentication logs.


Google AI Mode raises fears over control of news

Google’s AI Mode has quietly launched in the UK, reshaping how users access news by summarising information directly in search results.

By paraphrasing content gathered across the internet, the tool offers instant answers while reducing the need to visit original news sites.

Critics argue that the technology concentrates control over information in the UK by filtering what users see based on algorithms rather than editorial judgement. Concerns have grown over transparency, fairness and the future of independent journalism.

Publishers are not compensated for content used by AI Mode, and most users rarely click through to the sources. Newsrooms fear pressure to adapt their output to align with Google’s preferences or risk being buried online.

While AI may offer convenience, it lacks accountability. Journalism must operate under legal and regulatory frameworks, whereas AI faces no such scrutiny, even when its errors have real consequences.


Creative industries raise concerns over the EU AI Act

Organisations representing creative sectors have issued a joint statement expressing concerns over the current implementation of the EU AI Act, particularly its provisions for general-purpose AI systems.

The response focuses on recent documents, including the General Purpose AI Code of Practice, accompanying guidelines, and the template for training data disclosure under Article 53.

The signatories, drawn from music and broader creative industries, said they had engaged extensively throughout the consultation process. They now argue that the outcomes do not fully reflect the issues raised during those discussions.

According to the statement, the result does not provide the level of intellectual property protection that some had expected from the regulation.

The group has called on the European Commission to reconsider the implementation package and is encouraging the European Parliament and member states to review the process.

The original EU AI Act was widely acknowledged as a landmark regulation, with technology firms and creative industries closely watching its rollout across member countries.

Elsewhere, Google confirmed that it will sign the General Purpose AI Code of Practice. The company said the latest version supports Europe’s broader innovation goals more effectively than earlier drafts, but it also noted ongoing concerns.

These include the potential impact of specific requirements on competitiveness and on the handling of trade secrets.
