xAI accuses Apple and OpenAI of blocking competition in AI

Elon Musk’s xAI has filed a lawsuit in Texas accusing Apple and OpenAI of colluding to stifle competition in the AI sector.

The case alleges that both companies locked up markets to maintain monopolies, making it harder for rivals like X and xAI to compete.

The dispute follows Apple’s 2024 deal with OpenAI to integrate ChatGPT into Siri and other apps on its devices. According to the lawsuit, Apple’s exclusive partnership with OpenAI has prevented fair treatment of Musk’s products within the App Store, including the X app and xAI’s Grok app.

Musk previously threatened legal action against Apple over antitrust concerns, citing the company’s alleged preference for ChatGPT.

Musk, who acquired his social media platform X in a $45 billion all-stock deal earlier in the year, is seeking billions of dollars in damages and a jury trial. The legal action highlights Musk’s ongoing feud with OpenAI’s CEO, Sam Altman.

Musk, a co-founder of OpenAI who left in 2018 after disagreements with Altman, has repeatedly criticised the company’s shift to a profit-driven model. He is also pursuing separate litigation against OpenAI and Altman over that transition in California.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Honor and Google deepen platform partnership with longer updates and AI integration

Honor has announced a joint commitment with Google to strengthen its Android platform support. The company now guarantees six years of Android OS and security updates for its upcoming Honor 400 series, aligning with similar practices by Pixel and Samsung devices.

This update period is part of Honor’s wider Alpha Plan, a strategic framework positioning the company as an AI device ecosystem player.

Honor will invest US $10 billion over five years to support this transformation through hardware innovation, software longevity and AI agent integration.

The partnership enables deeper cooperation with Google around Android updates and AI features. Honor already integrates tools like Circle to Search, AI photo expansion and Gemini voice assistants on its Magic series. The extended software support promises longer device lifespans, reduced e-waste and improved user experience.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Coinbase CEO fired engineers who refused to adopt AI tools

Coinbase CEO Brian Armstrong has revealed that he fired engineers who refused to begin using AI coding tools after the company adopted GitHub Copilot and Cursor. Armstrong shared the story during a podcast hosted by Stripe co-founder John Collison.

Engineers were told to onboard with the tools within a week. Armstrong arranged a Saturday meeting for those who had not complied and said that employees without valid reasons would be dismissed. Some were excused due to holidays, while others were let go.

Collison raised concerns about relying too heavily on AI-generated code, prompting Armstrong to agree. Past reports have described challenges with managing code produced by AI, even at companies like OpenAI. Coinbase did not comment on the matter.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI’s overuse of the em dash could be your biggest giveaway

AI-generated writing may be giving itself away, and the em dash is its most flamboyant tell. Long beloved by grammar nerds for its versatility, the em dash has become AI’s go-to flourish, but not everyone is impressed.

Pacing, pauses, and a suspicious number of em dashes are often a sign that a machine had its hand in the prose. Even simple requests for editing can leave users with sentences reworked into what feels like an AI-powered monologue.

Though tools like ChatGPT or Gemini can be powerful assistants, using them blindly can dull the human spark. Overuse of certain AI quirks, like rhetorical questions, generic phrases or overstyled punctuation, can make even an honest email feel like corporate poetry.

Writers are being advised to take the reins back. Draft the first version by hand, let the AI refine it, then strip out anything that feels artificial, especially the dashes. Keeping your natural voice intact may be the best way to make sure your readers are connecting with you, not just the machine behind the curtain.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Meta teams up with Midjourney for AI video and image tools

Meta has confirmed a new partnership with Midjourney to license its AI image and video generation technology. The collaboration, announced by Meta Chief AI Officer Alexandr Wang, will see Meta integrate Midjourney’s tools into upcoming models and products.

Midjourney will remain independent following the deal. CEO David Holz said the startup, which has never taken external investment, will continue operating on its own. The company launched its first video model earlier this year and has grown rapidly, reportedly reaching $200 million in revenue by 2023.

Midjourney is currently being sued by Disney and Universal for alleged copyright infringement in AI training data. Meta faces similar challenges, although courts have often sided with tech firms in recent decisions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI could democratise higher education if implemented responsibly

Professor Orla Sheils of Trinity College Dublin calls on universities to embrace AI as a tool for educational equity rather than fear. She notes that AI is already ubiquitous in higher education, with students, lecturers, and researchers using it daily.

AI can help universities fulfil the democratic ideals of the Bologna Process and Ireland’s National AI Strategy by expanding lifelong learning, making education more accessible and supporting personalised student experiences.

Initiatives such as AI-driven tutoring, automated transcription and translation, streamlined timetabling and grading tools can free staff time while supporting learners with challenging schedules or disabilities.

Trinity’s AI Accountability Lab, led by Dr Abeba Birhane, exemplifies how institutions can blend innovation with ethics. Sheils warns that overreliance on AI risks academic integrity and privacy unless governed carefully. AI must serve educators, not replace them, preserving the human qualities of creativity and judgement in learning.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google Cloud’s new AI tools expand enterprise threat protection

Following last week’s announcements on AI-driven cybersecurity, Google Cloud has unveiled further tools at its Security Summit 2025 aimed at protecting enterprise AI deployments and boosting efficiency for security teams.

The updates build on prior innovations instead of replacing them, reinforcing Google’s strategy of integrating AI directly into security operations.

Vice President and General Manager Jon Ramsey highlighted the growing importance of agentic approaches as AI agents operate across increasingly complex enterprise environments.

Building on the previous rollout, Google now introduces Model Armor protections, designed to shield AI agents from prompt injections, jailbreaking, and data leakage, enhancing safeguards without interrupting existing workflows.

Additional enhancements include the Alert Investigation agent, which automates event enrichment and analysis while offering actionable recommendations.

By combining Mandiant threat intelligence feeds with Google’s Gemini AI, organisations can now detect and respond to incidents across distributed agent networks more rapidly and efficiently than before.

SecOps Labs and updated SOAR dashboards provide early access to AI-powered threat detection experiments and comprehensive visualisations of security operations.

These tools allow teams to continue scaling agentic AI security, turning previous insights into proactive, enterprise-ready protections for real-world deployments.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Musicians report surge in AI fakes appearing on Spotify and iTunes

Folk singer Emily Portman has become the latest artist targeted by fraudsters releasing AI-generated music in her name. Fans alerted her to a fake album called Orca appearing on Spotify and iTunes, which she said sounded uncannily like her style but was created without her consent.

Portman has filed copyright complaints, but says the platforms were slow to act, and she has yet to regain control of her Spotify profile. Other artists, including Josh Kaufman, Jeff Tweedy, Father John Misty, Sam Beam, Teddy Thompson, and Jakob Dylan, have faced similar cases in recent weeks.

Many of the fake releases appear to originate from the same source, using similar AI artwork and citing record labels with Indonesian names. The tracks are often credited to the same songwriter, Zyan Maliq Mahardika, whose name also appears on imitations of artists in other genres.

Industry analysts say streaming platforms and distributors are struggling to keep pace with AI-driven fraud. Tatiana Cirisano of Midia Research noted that fraudsters exploit passive listeners to generate streaming revenue, while services themselves are turning to AI and machine learning to detect impostors.

Observers warn the issue is likely to worsen before it improves, drawing comparisons to the early days of online piracy. Artists and rights holders may face further challenges as law enforcement attempts to catch up with the evolving abuse of AI.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Seemingly conscious AI may cause psychological problems and AI psychosis

Microsoft’s AI chief and DeepMind co-founder, Mustafa Suleyman, has warned that society is unprepared for AI systems that convincingly mimic human consciousness. He warned that ‘seemingly conscious’ AI could make the public treat machines as sentient.

Suleyman highlighted potential risks including demands for AI rights, welfare, and even AI citizenship. Since the launch of ChatGPT in 2022, AI developers have increasingly designed systems to act ‘more human’.

Experts caution that such technology could intensify mental health problems and distort perceptions of reality. The phenomenon known as AI Psychosis sees users forming intense emotional attachments or believing AI to be conscious or divine.

Suleyman called for clear boundaries in AI development, emphasising that these systems should be tools for people rather than digital persons. He urged careful management of human-AI interaction without calling for a halt to innovation.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Global tech competition intensifies as the UK outlines a £1 trillion digital blueprint

The United Kingdom has unveiled a strategy to grow its digital economy to £1 trillion by harnessing AI, quantum computing, and cybersecurity. The plan emphasises public-private partnerships, training, and international collaboration to tackle skills shortages and infrastructure gaps.

The initiative builds on the UK tech sector’s £1.2 trillion valuation, with regional hubs in cities such as Bristol and Manchester fuelling expansion in emerging technologies. Experts, however, warn that outdated systems and talent deficits could stall progress unless workforce development accelerates.

AI is central to the plan, with applications spanning healthcare and finance. Quantum computing also features, with investments in research and cybersecurity aimed at strengthening resilience against supply disruptions and future threats.

The government highlights sustainability as a priority, promoting renewable energy and circular economies to ensure digital growth aligns with environmental goals. Regional investment in blockchain, agri-tech, and micro-factories is expected to create jobs and diversify innovation-driven growth.

By pursuing these initiatives, the UK aims to establish itself as a leading global tech player alongside the US and China. Ethical frameworks and adaptive strategies will be key to maintaining public trust and competitiveness.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!