Musk–Altman clash escalates over Apple’s alleged AI bias

Elon Musk has accused Apple of favouring ChatGPT on its App Store and threatened legal action, sparking a clash with OpenAI CEO Sam Altman. Musk called Apple’s practices an antitrust violation and vowed to take immediate action through his AI company, xAI.

Critics on X noted that rivals such as DeepSeek AI and Perplexity AI have topped the App Store this year. Altman called Musk’s claim ‘remarkable’ and accused him of manipulating X for his own benefit. Musk called him a ‘liar’, prompting Altman to demand proof that Musk had never altered X’s algorithm.

OpenAI and xAI launched new versions of ChatGPT and Grok, which ranked first and fifth, respectively, among free iPhone apps on Tuesday. Apple, which partnered with OpenAI in 2024 to integrate ChatGPT, did not comment on the matter. App Store rankings take into account engagement, reviews, and downloads.

The dispute reignites a feud between Musk and OpenAI, which he co-founded but left before the success of ChatGPT. In April, OpenAI accused Musk of attempting to harm the company and establish a rival. Musk launched xAI in 2023 to compete with major players in the AI space.

Chinese startup DeepSeek has disrupted the AI market with cost-efficient models. Since ChatGPT’s 2022 debut, major tech firms have invested billions in AI. OpenAI claims Musk’s actions are driven by ambition rather than a mission for humanity’s benefit.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Musk faces OpenAI harassment counterclaims after a judge rejects dismissal bid

A federal judge has rejected Elon Musk’s bid to dismiss claims that he engaged in a ‘years-long harassment campaign’ against OpenAI.

US District Judge Yvonne Gonzalez Rogers ruled that the company’s counterclaims are sufficient to proceed as part of the lawsuit Musk filed against OpenAI and its CEO, Sam Altman, last year.

Musk, who helped found OpenAI in 2015, sued the AI firm in August 2024, alleging that Altman misled him about the company’s commitment to AI safety before it partnered with Microsoft and pursued for-profit goals.

OpenAI responded with counterclaims in April, accusing Musk of persistent attacks in the press and on his platform X, demands for corporate records, and a ‘sham bid’ for the company’s assets.

The filing alleged that Musk sought to undermine OpenAI instead of supporting humanity-focused AI, intending to build a rival to take the technological lead.

The feud between Musk and Altman has continued, most recently with Musk threatening to sue Apple over App Store listings for X and his AI chatbot Grok. Altman dismissed the claim, criticising Musk for allegedly manipulating X to benefit his companies and harm competitors.

Despite the ongoing legal battle, OpenAI says it will remain focused on product development instead of engaging in public disputes.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK-based ODI outlines vision for EU AI Act and data policy

The Open Data Institute (ODI) has published a manifesto setting out six principles for shaping European Union policy on AI and data. Aimed at supporting policymakers, it aligns with the EU’s upcoming digital reforms, including the AI Act and the review of the bloc’s digital framework.

Although based in the UK, the ODI has previously contributed to EU policymaking, including work on the General-Purpose AI Code of Practice and consultations on the use of health data. The organisation also launched a similar manifesto for UK data and AI policy in 2024.

The ODI states that the EU has a chance to establish a global model of digital governance that prioritises people’s interests. Its director of research, Elena Simperl, called for robust open data infrastructure, inclusive participation, and independent oversight to build trust, support innovation, and protect values.

The six principles, which draw on the EU’s Competitiveness Compass and the Draghi report, are data infrastructure, open data, trust, independent organisations, an inclusive data ecosystem, and data skills. The goal is to balance regulation and innovation while upholding rights, values, and interoperability.

The ODI highlights the need to limit bias and inequality, broaden access to data and skills, and support smaller enterprises. It argues that strong governance should be treated like physical infrastructure, enabling competitiveness while safeguarding rights and public trust in the AI era.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI browsers accused of harvesting sensitive data in new study

A new study from researchers in the UK and Italy found that popular AI-powered browsers collect and share sensitive personal data, often in ways that may breach privacy laws.

The team tested ten well-known AI assistants, including ChatGPT, Microsoft’s Copilot, Merlin AI, Sider, and TinaMind, using public websites and private portals like health and banking services.

All but Perplexity AI showed evidence of gathering private details, from medical records to social security numbers, and transmitting them to external servers.

The investigation revealed that some tools continued tracking user activity even during private browsing, sending full web page content, including confidential information, to their systems.

Sometimes, prompts and identifying details, like IP addresses, were shared with analytics platforms, enabling potential cross-site tracking and targeted advertising.

Researchers also found that some assistants profiled users by age, gender, income, and interests, tailoring their responses across multiple sessions.

According to the report, such practices likely violate American health privacy laws and the European Union’s General Data Protection Regulation.

Privacy policies for some AI browsers acknowledge collecting names, contact information, payment data, and more, and in some cases storing that information outside the EU.

The study warns that users cannot be sure how their browsing data is handled once gathered, raising concerns about transparency and accountability in AI-enhanced browsing.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Musk threatens legal action against Apple over AI app rankings

Elon Musk has announced plans to sue Apple, accusing the company of unfairly favouring OpenAI’s ChatGPT over his xAI app Grok on the App Store.

Musk claims that Apple’s ranking practices make it impossible for any AI app except OpenAI’s to reach the top spot, calling this behaviour an ‘unequivocal antitrust violation’. ChatGPT holds the number one position on Apple’s App Store, while Grok ranks fifth.

Musk expressed frustration on social media, questioning why his X app, which he describes as ‘the number one news app in the world,’ has not received higher placement. He suggested that Apple’s ranking decisions might be politically motivated.

The dispute highlights growing tensions as AI companies compete for prominence on major platforms.

Apple and Musk’s xAI have not yet responded to requests for comment.

The controversy unfolds amid increasing scrutiny of App Store policies and their impact on competition, especially within the fast-evolving AI sector.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Musk and OpenAI CEO Altman clash over Apple and X

OpenAI CEO Sam Altman responded strongly after Elon Musk accused Apple of favouring OpenAI’s ChatGPT over other AI applications on the App Store.

Altman alleged that Musk manipulates the social media platform X for his benefit, targeting competitors and critics. The exchange adds to their history of public disagreements since Musk left OpenAI’s board in 2018.

Musk’s claim centres on Apple’s refusal to list X or Grok (xAI’s AI app) in the App Store’s ‘Must have’ section, despite X being, by his account, the top news app worldwide and Grok ranking fifth.

Although Musk has not provided evidence of an antitrust violation, a recent US court ruling found Apple in contempt for restricting App Store competition. The EU also fined Apple €500 million earlier this year over commercial restrictions on app developers.

OpenAI’s ChatGPT currently leads the App Store’s ‘Top Free Apps’ list for iPhones in the US, while Grok holds the fifth spot. Musk’s accusations highlight ongoing tensions in the AI industry as big tech companies battle for app visibility and market dominance.

The situation emphasises how regulatory scrutiny and legal challenges shape competition within the digital economy.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

US charges four over global romance scam and BEC scheme

Four Ghanaian nationals have been charged in the United States over an international cybercrime scheme that allegedly stole more than $100 million through sophisticated romance scams and business email compromise (BEC) attacks targeting individuals and companies nationwide.

The syndicate, allegedly led by Isaac Oduro Boateng, Inusah Ahmed, Derrick van Yeboah, and Patrick Kwame Asare, used fake romantic relationships and email spoofing to deceive victims, and targeted businesses by altering payment details to divert funds.

US prosecutors say the group maintained a global infrastructure, with command and control elements in West Africa. Stolen funds were laundered through a hierarchical network to ‘chairmen’ who coordinated operations and directed subordinate operators executing fraud schemes.

Investigators found that the romance scams relied on detailed victim profiling, while the BEC attacks involved monitoring transactions and swapping banking details. Multiple schemes ran concurrently under strict operational security to avoid detection.

Three of the suspects were extradited and arrived in the United States on 7 August 2025, in a transfer arranged through cooperation between US authorities and Ghana’s Economic and Organised Crime Office.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

US Judiciary confirms cyberattack, moves to reinforce systems

The US Judiciary has confirmed suffering a cyberattack and says it is reinforcing systems to prevent further breaches. In a press release, it described ‘escalated cyberattacks of a sophisticated and persistent nature’ targeting its case management system and sensitive files.

Most documents in the judiciary’s electronic system are public; however, some contain confidential or proprietary information that is sealed from public view. The documents, it warned, are of interest to threat actors, prompting courts to introduce stricter controls on access under monitored conditions.

The Administrative Office of the US Courts is collaborating with Congress, the Department of Justice, the Department of Homeland Security, and other relevant agencies on security measures. No details were given on the exact methods of reinforcement.

The US court system has been a frequent target of cybercrime. Previous incidents include a 2020 federal court breach, a 2024 attack on Washington state courts, and a ransomware strike on the Los Angeles Superior Court in summer 2024.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

UAE Ministry of Interior uses AI and modern laws to fight crime

The UAE Ministry of Interior states that AI, surveillance, and modern laws are key to fighting crime. It classifies offences as economic, traditional, or cyber, with data tools and legal updates improving investigations. Cybercrime is on the rise as digital technology expands.

Current measures include AI monitoring, intelligent surveillance, and new laws. Economic crimes like fraud and tax evasion are addressed through analytics and banking cooperation. Cross-border cases and digital evidence tampering continue to be significant challenges.

Traditional crimes, such as theft and assault, are addressed through cameras, patrols, and awareness drives. Some offences persist in remote or crowded areas. Technology and global cooperation have improved results in several categories.

UAE officials warn that AI and the Internet of Things will lead to more sophisticated cyberattacks. Future challenges include evolving criminal tactics, privacy threats, skills shortages, and balancing security with individual rights.

Opportunities include AI-powered security, stronger global ties, and better cybersecurity. Dubai Police have launched a bilingual platform to educate the public, viewing awareness as the first line of defence against online threats.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Colorado’s AI law under review amid budget crisis

Colorado lawmakers face a dual challenge as they return to the State Capitol on 21 August for a special session: closing a $1.2 billion budget shortfall and revisiting a pioneering yet controversial law regulating AI.

Senate Bill 24-205, signed into law in May 2024, aims to reduce bias in AI decision-making affecting areas such as lending, insurance, education, and healthcare. Although the law is not due to take effect until February 2026, critics and supporters alike now expect that deadline to be extended.

Representative Brianna Titone, one of the bill’s sponsors, emphasised the importance of transparency and consumer safeguards, warning of the risks associated with unregulated AI. However, unexpected costs have emerged. State agencies estimate implementation could cost up to $5 million, a far cry from the bill’s original fiscal note.

Governor Polis has called for amendments to prevent excessive financial and administrative burdens on state agencies and businesses. The Judicial Department now expects costs to double from initial projections, requiring supplementary budget requests.

Industry concerns centre on data-sharing requirements and vague regulatory definitions. Critics argue the law could erode competitive advantage and stall innovation in the United States. Developers are urging clarity and more time before compliance is enforced.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!