Google to replace Assistant with Gemini in smart home devices

Google has announced that Gemini will soon power its smart home platform, replacing Google Assistant on existing Nest speakers and displays from October. The feature will launch initially as an early preview.

Gemini for Home promises more natural conversations and can manage complex household tasks, including controlling smart devices, creating calendars, and handling lists or timers through natural language commands. It will also support Gemini Live for ongoing dialogue.

Google says the upgrade is designed to serve all household members and visitors, offering hands-free help and integration with streaming platforms. The move signals a renewed focus on Google Home, a product line that has been largely overlooked in recent years.

The announcement hints at potential new hardware, given that Google’s last Nest Hub was released in 2021 and the Nest Audio speaker dates back to 2020.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Meta freezes hiring as AI costs spark investor concern

Meta has frozen hiring in its AI division, halting a spree that had drawn top researchers with lucrative offers. The company described the pause as basic organisational planning, aimed at building a more stable structure for its superintelligence ambitions.

The freeze, first reported by the Wall Street Journal, began last week and prevents employees in the unit from transferring to other teams. Its duration has not been communicated, and Meta declined to comment on the number of hires already made.

The decision follows growing tensions inside the newly created Superintelligence Labs, where long-serving researchers have voiced concerns over disparities in pay and recognition compared with recruits.

Alexandr Wang, who leads the division, recently told staff that superintelligence is approaching and that major changes are necessary to prepare. His email outlined Meta's most significant reorganisation of its AI efforts.

The pause also comes amid investor scrutiny, as analysts warn that heavy reliance on stock-based compensation to attract talent could either fuel innovation or dilute shareholder value without delivering clear results.

Despite these concerns, Meta’s stock has risen by about 28% since the start of the year, reflecting continued investor confidence in the company’s long-term prospects.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Rethinking ‘soft skills’ as core drivers of transformation

Communication, empathy, and judgment were dismissed for years as 'soft skills', sidelined while technical expertise dominated training and promotion. A new perspective argues that these human competencies are fundamental to resilience and transformation.

Researchers and practitioners emphasise that AI can expedite decision-making but cannot replace human judgment, trust, or narrative. Failures in leadership often stem from a lack of human capacity rather than technical gaps.

Redefining skills like decision-making, adaptability, and emotional intelligence as measurable behaviours helps organisations train and evaluate leaders effectively. Embedding these human disciplines ensures transformation holds under pressure and uncertainty.

Careers and cultures are strengthened when leaders are assessed on their ability to build trust, resolve conflicts, and influence through storytelling. Without investing in the human core alongside technical skills, strategies collapse and talent disengages.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Google urges users to update Chrome after V8 flaw patched

Google has patched a high-severity flaw in its Chrome browser with the release of version 139, addressing vulnerability CVE-2025-9132 in the V8 JavaScript engine.

The out-of-bounds write issue was discovered by Big Sleep AI, a tool built by Google DeepMind and Project Zero to automate vulnerability detection in real-world software.

Chrome 139 updates (Windows/macOS: 139.0.7258.138/.139, Linux: 139.0.7258.138) are now rolling out to users. Google has not confirmed whether the flaw is being actively exploited.

Users are strongly advised to install the latest update to ensure protection, as V8 powers both JavaScript and WebAssembly within Chrome.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Study finds chain-of-thought reasoning in LLMs is a brittle mirage

A new study from Arizona State University researchers suggests that chain-of-thought reasoning in large language models (LLMs) is closer to pattern matching than genuine logical inference. The findings challenge assumptions about human-like intelligence in these systems.

The researchers used a data distribution lens to examine where chain-of-thought fails, testing models on new tasks, different reasoning lengths, and altered prompt formats. Across all cases, performance degraded sharply outside familiar training structures.

Their framework, DataAlchemy, showed that models replicate training patterns rather than reason abstractly. Failures could be patched quickly by fine-tuning on small new datasets, but this only reinforced the pattern-matching interpretation.

The paper warns developers against relying on chain-of-thought reasoning for high-stakes domains, emphasising the risks of fluent but flawed rationale. It urges practitioners to implement rigorous out-of-distribution testing and treat fine-tuning as a limited patch.
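The out-of-distribution testing the paper recommends can be pictured as a simple gap measurement: score the same model on tasks that match its training format and on reworded or novel tasks, then compare the two accuracies. The sketch below is illustrative only; the toy "model" and the task examples are hypothetical stand-ins, not taken from the study:

```python
def accuracy(model, tasks):
    """Fraction of (question, answer) tasks the model answers correctly."""
    return sum(model(q) == a for q, a in tasks) / len(tasks)

def ood_gap(model, in_dist, out_dist):
    """Accuracy drop when moving from familiar to unfamiliar task formats."""
    return accuracy(model, in_dist) - accuracy(model, out_dist)

# Toy "pattern matcher": answers correctly only for prompts it memorised,
# standing in for a model that replicates training patterns.
memorised = {"2+2": "4", "3+3": "6"}
model = lambda q: memorised.get(q, "?")

in_dist = [("2+2", "4"), ("3+3", "6")]        # matches training structure
out_dist = [("2 plus 2", "4"), ("4+4", "8")]  # reworded or novel tasks

print(ood_gap(model, in_dist, out_dist))  # prints 1.0: a large gap
```

A near-zero gap on such a probe suggests the model generalises beyond its training formats; a large gap, as here, is the fragility the study describes.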

The researchers argue that applications can remain effective for enterprise use by systematically mapping a model’s boundaries and aligning them with predictable tasks. Targeted fine-tuning then becomes a tool for precision rather than broad generalisation.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Quantum computing firm strengthens European presence

US quantum computing firm Strangeworks has expanded its European presence by acquiring German company Quantagonia. The merger allows organisations to tackle complex planning and optimisation using classical, hybrid, quantum, and quantum-inspired technologies.

Quantagonia, founded in 2021, develops AI-powered, quantum-ready planning tools that combine optimisation, AI, and natural language interfaces. The technology enables experts and non-technical users to solve problems across industries, including life sciences, finance, energy, and logistics.

The acquisition removes barriers to advanced decision-making and opens new go-to-market opportunities in previously underserved sectors.

The combined entity will merge Quantagonia’s solver engine and AI decision-making tools with Strangeworks’ AI and quantum infrastructure. The approach lets enterprises run multiple solvers in parallel and solve problems using natural language without technical expertise.

Strangeworks has strengthened its strategic European foothold, adding to its recent expansion in India and existing operations in the US and APAC. Executives said the merger boosts global growth and broadens access to sophisticated optimisation tools.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google Pixel 10 could transform smartphones with advanced AI features

Google’s upcoming Pixel 10 smartphones are tipped to place AI at the centre of the user experience, with three new features expected to redefine how people use their devices.

While hardware upgrades are anticipated at the Made by Google event, much of the excitement revolves around the AI tools that may debut.

One feature, called Help Me Edit, is designed for Google Photos. Instead of spending time on manual edits, users could describe the change they want, such as altering the colour of a car, and the AI would adjust instantly.

Expanding on the Pixel 9’s generative tools, it promises far greater control and speed.

Another addition, Camera Coach, could offer real-time guidance on photography. Using Google’s Gemini AI, the phone may provide step-by-step advice on framing, lighting, and composition, acting as a digital photography tutor.

Finally, Pixel Sense is rumoured to be a proactive personal assistant that anticipates user needs. Learning patterns from apps such as Gmail and Calendar, it could deliver predictive suggestions and take actions across third-party services, bringing the smartphone closer to a truly adaptive companion.

These features suggest that Google is betting heavily on AI to give the Pixel 10 a competitive edge.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Zimbabwe to launch national AI policy by October to boost digital sovereignty

Zimbabwe’s Information and Communication Technology Minister, Tendai Mavetera, revealed the second draft of the National AI Policy during the AI Summit for Africa 2025 in Victoria Falls, hosted by Alpha Media Holdings and AIIA.

Though the policy was not formalised during the summit, Mavetera stated it is expected to be launched by 1 October 2025 at the new Parliament building, with presidential presence anticipated.

The strategy is designed to foster an Africa where AI serves humanity, ensuring connectivity in every village, education access for every child, and opportunity for every young person.

Core features include data sovereignty and secure data storage, with institutions like TelOne expected to host localised solutions, moving away from past practices of storing data abroad.

Speakers at the summit underscored AI’s role in economic and social transformation rather than job displacement; Africa’s investment in AI surpassed US$200 billion in 2024.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI in justice: Bridging the global access gap or deepening inequalities

At least 5 billion people worldwide lack access to justice, a human right enshrined in international law. In many regions, particularly low- and middle-income countries, millions face barriers to justice, ranging from socioeconomic disadvantage to failures of the legal system itself. Meanwhile, AI has entered the legal sector at full speed and may offer legitimate solutions to bridge this justice gap.

Through chatbots, automated document review, predictive legal analysis, and AI-enabled translation, AI holds promise to improve efficiency and accessibility. Yet the speed of AI's arrival in legal systems across the globe means the digitalisation of justice is already well underway.

While it may serve as a tool to break down access barriers, AI legal tools could also introduce the automation of bias in our judicial systems, unaccountable decision-making, and act as an accelerant to a widening digital divide. AI is capable of meaningfully expanding equitable justice, but its implementation must safeguard human rights principles. 

Improving access to justice

Across the globe, AI legal assistance pilot programmes are underway. The UNHCR piloted an AI agent in Jordan to reduce legal communication barriers: the system transcribes, translates, and organises refugee queries. With its help, users can streamline caseload management, which is key to keeping operations smooth even under financial strain.

NGOs working to increase access to justice, such as Migrasia in Hong Kong, have begun using AI-powered chatbots to triage legal queries from migrant workers, offering 24/7 multilingual legal assistance.

While it is clear that these tools are designed to assist rather than replace human legal experts, they are showing the potential to significantly reduce delays by streamlining processes. In the UK, AI transcription tools are being used to provide victims of serious sexual crimes with access to judges' sentencing remarks and explanations of legal language. This enhances transparency for victims, especially those seeking emotional closure.

Even as these programmes remain in pilot stages, a UNESCO survey found that 44% of judicial workers across 96 countries are currently using AI tools, such as ChatGPT, for tasks like drafting and translating documents. The Moroccan judiciary, for example, has already integrated AI technology into its legal system.

AI tools help judges prepare judgments for various cases and streamline legal document preparation. The technology allows for faster document drafting in a multilingual environment. Soon, AI-powered case analysis based on prior case data may also provide legal experts with predictive outcomes. AI tools have the opportunity, and are already beginning, to break down barriers to justice and ultimately improve the just application of the law.

Risking human rights

While AI-powered legal assistance can provide affordable access, improve outreach to rural or marginalised communities, close linguistic divides, and streamline cases, it also poses a serious risk to human rights. The most prominent concerns surround bias and discrimination, as well as widening the digital divide.

Deploying AI without transparency can lead to algorithmic systems perpetuating systemic inequalities, such as racial or ethnic biases. Meanwhile, the risk of black-box decision-making, through the use of AI tools with unexplainable outputs, can make it difficult to challenge legal decisions, undermining due process and the right to a fair trial.

Experts emphasise that the integration of AI into legal systems must focus on supporting human judgment rather than outright replacing it. Whether AI is biased by its training data or simply becomes a black box over time, its use needs foresighted governance and meaningful human oversight.


Additionally, AI will greatly impact economic justice, especially for those in low-income or marginalised communities. Legal professionals often lack the training and skills needed to use AI tools effectively. In many legal systems, lawyers, judges, clerks, and assistants do not feel confident explaining AI outputs or monitoring their use.

This lack of education undermines the accountability and transparency needed to integrate AI meaningfully. It may lead to misuse of the technology, such as unverified translations, which can result in legal errors.

While the use of AI improves efficiency, it may erode public trust when legal actors fail to use it correctly or the technology reflects systemic bias. The judiciary in Texas, US, warned about this concern in an opinion that detailed the fear of integrating opaque systems into the administration of justice. Public trust in the legal system is already eroding in the US, with just over a third of Americans expressing confidence in 2024.

The incorporation of AI into the legal system threatens to erode what public faith remains. Meanwhile, those without digital connectivity or literacy education may be further excluded from justice. Many AI tools are developed by for-profit actors, raising questions about justice accessibility in an AI-powered legal system. Furthermore, AI providers will have access to sensitive case data, which poses a risk of misuse and even surveillance.

The policy path forward

As already stated, for AI to be integrated into legal systems and help bridge the justice gap, it must assist human judges, lawyers, and other legal actors, not replace them. For AI to assist, it must be transparent, accountable, and a supplement to human reason. UNESCO and some regional courts in Eastern Africa advocate for judicial training programmes, thorough guidelines, and toolkits that promote the ethical use of AI.

The focus of legal AI education must be to improve AI literacy, teach bias awareness, and inform users of their digital rights. Legal actors must keep pace with the innovation and integration of AI. They belong at the core of policy discussions, as they understand existing norms and have firsthand experience of how the technology affects human rights.

Other actors are also at play in this discussion. Taking a multistakeholder approach that centres on existing human rights frameworks, such as the Toronto Declaration, is the path to achieving effective and workable policy. Closing the justice gap by utilising AI hinges on the public’s access to the technology and understanding how it is being used in their legal systems. Solutions working to demystify black box decisions will be key to maintaining and improving public confidence in their legal systems. 

The future of justice

AI has the transformative capability to help bridge the justice gap by expanding reach, streamlining operations, and reducing costs. It has the potential to serve the application of justice and deliver powerful improvements to inclusion in our legal systems.

However, it also risks deepening inequalities and eroding public trust. AI integration must be governed by the human rights norms of transparency and accountability, with regulation built on education and discussion grounded in ethical frameworks. Now is the time to invest in digital literacy for legal empowerment, ensuring that AI tools are developed to be contestable and to serve as human-centric support.


Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Zoom launches AI Virtual Agent to replace human receptionists

Zoom has unveiled its Virtual Agent for Zoom Phone, a 24/7 AI concierge designed to replace or support human receptionists. The tool can greet callers naturally, process requests, and initiate next steps without human intervention, aiming to reduce missed calls and waiting times.

The AI agent is initially available in English, Spanish, French, German, Portuguese, and Japanese, with more languages planned.

Companies can set up the system without coding expertise by training it with existing documents or company websites, allowing for a faster, personalised, and scalable customer experience.

Zoom highlighted use cases across sectors, including booking appointments in healthcare, checking stock and answering retail product queries, and providing financial service updates. The Virtual Agent promises to handle these tasks autonomously, giving businesses greater efficiency and flexibility.

In addition, Zoom has enhanced its AI Companion tool to manage meeting scheduling. The agent can coordinate invites, track responses, and suggest alternatives, freeing teams to focus on discussions rather than logistics.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!