Citing sworn testimony, OpenAI said Musk had discussed possible financing arrangements with Zuckerberg as part of the bid. Musk’s AI startup xAI, a competitor to OpenAI, did not respond to requests for comment.
In the filing, OpenAI asked a federal judge to order Meta to provide documents related to any bid for OpenAI, including internal communications about restructuring or recapitalisation. The firm argued these records could clarify motivations behind the bid.
Meta countered that such documents were irrelevant and suggested OpenAI seek them directly from Musk or xAI. A US judge ruled that Musk must face OpenAI’s claims of attempting to harm the company through public remarks and what it described as a sham takeover attempt.
The legal dispute follows Musk’s lawsuit against OpenAI and Sam Altman over its for-profit transition, with OpenAI filing a countersuit in April. A jury trial is scheduled for spring 2026.
AI-enabled cameras in Devon and Cornwall have detected 6,000 people failing to wear seat belts over the past year. That figure is 50% higher than the number of drivers penalised for using mobile phones behind the wheel, police confirmed.
Road safety experts warn that the long-standing culture of belting up may be fading among newer generations of drivers. Geoff Collins of Acusensus noted a rise in non-compliance and said stronger legal penalties could help reverse the trend.
Current UK law imposes a £100 fine for not wearing a seat belt, with no points added to a driver’s licence. Campaigners now urge the government to make such offences endorsable, potentially adding penalty points and risking licence loss.
A new study in Robot Learning has introduced a robotic system that combines machine learning with decision-making to analyse water samples. The approach enables robots to detect and classify water samples, distinguishing drinkable from non-drinkable water on Earth and potentially on other planets.
Researchers used a hybrid method that merged the TOPSIS decision-making technique with a Random Forest Classifier trained on the Water Quality and Potability Dataset from Kaggle. By applying data balancing techniques, classification accuracy rose from 69% to 73%.
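The study’s exact pipeline is not reproduced here, but a minimal sketch of the classification-plus-balancing step might look like the following, assuming the public Kaggle file water_potability.csv and SMOTE as the balancing technique (both assumptions; the researchers may have used different choices):

```python
# Minimal sketch (not the paper's code): Random Forest with class balancing
# on the Kaggle water-potability data. SMOTE is an assumed balancing method.
import pandas as pd
from imblearn.over_sampling import SMOTE  # pip install imbalanced-learn
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("water_potability.csv").dropna()  # drop rows with missing readings
X, y = df.drop(columns=["Potability"]), df["Potability"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Oversample the minority (potable) class so both classes are equally represented
X_bal, y_bal = SMOTE(random_state=42).fit_resample(X_train, y_train)

clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_bal, y_bal)
print(f"Test accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2f}")
```

Balancing matters here because the Kaggle dataset skews towards non-potable samples, so an unbalanced classifier can score well simply by favouring the majority class.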
The robotic prototype includes thrusters, motors, solar power, sensors, and a robotic arm for sample collection. Water samples are tested in real time, with the onboard model classifying them as drinkable or non-drinkable.
The system has the potential for rapid crisis response, sustainable water management, and planetary exploration, although challenges remain regarding sensor accuracy, data noise, and scalability. Researchers emphasise that further testing is necessary before real-world deployment.
A new MIT study has found that 95% of corporate AI projects fail to deliver returns, mainly due to difficulties integrating them with existing workflows.
The report, ‘The GenAI Divide: State of AI in Business 2025’, examined 300 deployments and interviewed 350 employees. Only 5% of projects generated value, typically when focused on solving a single, clearly defined problem.
Executives often blamed model performance, but researchers pointed to a workforce ‘learning gap’ as the bigger barrier. Many projects faltered because staff were unprepared to adapt processes effectively.
More than half of GenAI budgets were allocated to sales and marketing, yet the most substantial returns came from automating back-office tasks, such as reducing agency costs and streamlining roles.
The study also found that tools purchased from specialised vendors were roughly twice as successful as systems built in-house, succeeding 67% of the time compared with 33%.
Google has announced that Gemini will soon power its smart home platform, replacing Google Assistant on existing Nest speakers and displays from October. The feature will launch initially as an early preview.
Gemini for Home promises more natural conversations and can manage complex household tasks, including controlling smart devices, managing calendars, and handling lists or timers through natural language commands. It will also support Gemini Live for ongoing dialogue.
Google says the upgrade is designed to serve all household members and visitors, offering hands-free help and integration with streaming platforms. The move signals a renewed focus on Google Home, a product line that has been largely overlooked in recent years.
The announcement hints at potential new hardware, given that Google’s last Nest Hub was released in 2021 and the Nest Audio speaker dates back to 2020.
Communication, empathy, and judgment were dismissed for years as ‘soft skills’, sidelined while technical expertise dominated training and promotion. A new perspective argues that these human competencies are fundamental to resilience and transformation.
Researchers and practitioners emphasise that AI can expedite decision-making but cannot replace human judgment, trust, or narrative. Failures in leadership often stem from a lack of human capacity rather than technical gaps.
Redefining skills like decision-making, adaptability, and emotional intelligence as measurable behaviours helps organisations train and evaluate leaders effectively. Embedding these human disciplines ensures transformation holds under pressure and uncertainty.
Careers and cultures are strengthened when leaders are assessed on their ability to build trust, resolve conflicts, and influence through storytelling. Without investing in the human core alongside technical skills, strategies collapse and talent disengages.
Microsoft AI chief Mustafa Suleyman has urged AI firms to stop suggesting their models are conscious, warning of growing risks from unhealthy human attachments to AI systems.
In a blog post, he described the phenomenon as Seemingly Conscious AI, where models mimic human responses convincingly enough to give users the illusion of feeling and thought. He cautioned that this could fuel advocacy for AI rights, welfare, or even citizenship.
Suleyman stressed that such beliefs could emerge even among people without prior mental health issues. He called on the industry to develop guardrails that prevent or counter perceptions of AI consciousness.
AI companions, a fast-growing product category, were highlighted as requiring urgent safeguards. The Microsoft AI chief’s comments follow recent controversies, including OpenAI’s decision to temporarily deprecate GPT-4o, which drew protests from users emotionally attached to the model.
Google has patched a high-severity flaw in its Chrome browser with an update to version 139, addressing vulnerability CVE-2025-9132 in the V8 JavaScript engine.
The out-of-bounds write issue was discovered by Big Sleep, an AI tool built by Google DeepMind and Project Zero to automate vulnerability detection in real-world software.
Chrome 139 updates (Windows/macOS: 139.0.7258.138/.139, Linux: 139.0.7258.138) are now rolling out to users. Google has not confirmed whether the flaw is being actively exploited.
Users are strongly advised to install the latest update to ensure protection, as V8 powers both JavaScript and WebAssembly within Chrome.
Google Translate may soon evolve into a full-featured language learning tool, introducing AI-powered lessons rivalling apps like Duolingo.
A hidden feature called Practice was recently uncovered in the latest Translate app release. It enables users to take part in interactive learning scenarios.
Early tests allow learners to choose languages such as Spanish and French, then engage with situational exercises from beginner to advanced levels.
The tool personalises lessons using AI, adapting difficulty and content based on a user’s goals, such as preparing for specific trips.
Users can track progress, receive daily practice reminders, and customise prompts for listening and speaking drills through a dedicated settings panel.
The feature resembles gamified learning apps and may join Google’s premium AI offerings, though pricing and launch plans remain unconfirmed.
A new study from Arizona State University researchers suggests that chain-of-thought reasoning in large language models (LLMs) is closer to pattern matching than to genuine logical inference. The findings challenge assumptions about human-like intelligence in these systems.
The researchers used a data distribution lens to examine where chain-of-thought fails, testing models on new tasks, different reasoning lengths, and altered prompt formats. Across all cases, performance degraded sharply outside familiar training structures.
Their framework, DataAlchemy, showed that models replicate training patterns rather than reason abstractly. Failures could be patched quickly through fine-tuning on small new datasets, but this reinforced the pattern-matching theory.
The paper warns developers against relying on chain-of-thought reasoning for high-stakes domains, emphasising the risks of fluent but flawed rationale. It urges practitioners to implement rigorous out-of-distribution testing and treat fine-tuning as a limited patch.
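As a rough illustration of that kind of out-of-distribution check, a sketch might score the same questions under a training-like prompt format and a shifted one. Everything below is invented for illustration: ask_model is a hypothetical placeholder for a real LLM call, and the tasks and formats are not from the paper.

```python
# Illustrative sketch of an out-of-distribution prompt-format check.
# `ask_model` is a hypothetical stand-in; replace it with a real LLM call.
def ask_model(prompt: str) -> str:
    return "43"  # placeholder response so the sketch runs end to end

TASKS = [("What is 17 + 26?", "43"), ("What is 9 * 8?", "72")]

PROMPT_FORMATS = {
    # Format resembling what the model saw during training or tuning
    "in_distribution": "Q: {q}\nA:",
    # Altered wrapper text and layout, same underlying task
    "shifted": "Please answer concisely.\nQUESTION: {q}\nANSWER:",
}

for name, template in PROMPT_FORMATS.items():
    correct = sum(
        expected in ask_model(template.format(q=question))
        for question, expected in TASKS
    )
    print(f"{name}: {correct}/{len(TASKS)} correct")

# A sharp accuracy drop on the shifted format is the failure mode the
# paper attributes to pattern matching rather than abstract reasoning.
```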
The researchers argue that applications can remain effective for enterprise use by systematically mapping a model’s boundaries and aligning them with predictable tasks. Targeted fine-tuning then becomes a tool for precision rather than broad generalisation.