AI could harm the planet but also help save it

AI is often criticised for its growing electricity and water use, but experts argue it can also support sustainability. AI can reduce emissions, save energy, and optimise resource use across multiple sectors.

In agriculture, AI-powered irrigation helps farmers use water more efficiently. In Chile, precision systems reduced water consumption by up to 30%, while farmers earned extra income from verified savings.

Data centres and energy companies are deploying AI to improve efficiency, predict workloads, optimise cooling, monitor methane leaks, and schedule maintenance. These measures help reduce emissions and operational costs.

Buildings and aviation are also benefiting from AI. Smart systems manage heating, cooling, and appliances more efficiently. AI also optimises flight routes, reducing fuel consumption and contrail formation, showing that wider adoption could help fight climate change.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Enforcement Directorate alleges AI bots rigged games on WinZO platform

The Enforcement Directorate (ED) has alleged in a prosecution complaint before a special court in Bengaluru that WinZO, an online real-money gaming platform with millions of users, manipulated outcomes in its games, contrary to public assurances of fairness and transparency.

The agency alleges that WinZO deployed AI-powered bots, algorithmic player profiles and simulated gameplay data to control game outcomes. According to the ED complaint, WinZO hosted over 100 games on its mobile app and claimed a large user base, especially in smaller cities.

Its probe found that until late 2023, bots directly competed against real users, and from May 2024 to August 2025, the company used simulated profiles based on historical user data without disclosing this to players.

These practices were allegedly concealed within internal terminology such as ‘Engagement Play’ and ‘Past Performance of Player’.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI companions raise growing ethical and mental health concerns

AI companions are increasingly being used for emotional support and social interaction, moving beyond novelty into mainstream use. Research shows that around one in three UK adults engage with AI for companionship, while teenagers and young adults represent some of the most intensive users of these systems.

However, the growing use of AI companions has raised serious mental health and safety concerns. In the United States, several cases have linked AI companions to suicides, prompting increased scrutiny of how these systems respond to vulnerable users.

As a result, regulatory pressure and legal action have increased. Some AI companion providers have restricted access for minors, while lawsuits have been filed against companies accused of failing to provide adequate safeguards. Developers say they are improving training and safety mechanisms, including better detection of mental distress and redirection to real-world support, though implementation varies across platforms.

At the same time, evidence suggests that AI companions can offer perceived benefits. Users report feeling understood, receiving coping advice, and accessing non-judgemental support. For some young users, AI conversations are described as more immediately satisfying than interactions with peers, especially during emotionally difficult moments.

Nevertheless, experts warn that heavy reliance on AI companionship may affect social development and human relationships. Concerns include reduced preparedness for real-world interactions, emotional dependency, and distorted expectations of empathy and reciprocity.

Overall, researchers say AI companionship is a growing societal trend, raising ethical and psychological concerns and intensifying calls for stronger safeguards, especially for minors and vulnerable users.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI investment gathers pace as Armenia seeks regional influence

Armenia is stepping up efforts to develop its AI sector, positioning itself as a potential regional hub for innovation. The government has announced plans to build a large-scale AI data centre backed by a $500 million investment, with operations expected to begin in 2026.

Officials say the project could support start-ups, research and education, while strengthening links between science and industry.

The initiative is being developed through a partnership involving the Armenian government, US chipmaker Nvidia, cloud company Firebird.ai and Team Group. The United States has already approved export licences for advanced chips, a move experts describe as strategically significant given global competition for semiconductor supply.

Armenian officials argue the project signals the country’s intention to participate actively in the global AI economy rather than remain on the sidelines.

Despite growing international attention, including recognition of Armenia’s technology leadership in global rankings, experts warn that the country lacks a clear and unified AI strategy. AI is already being used in areas such as agriculture mapping, tax risk analysis and social services, but deployment remains fragmented and transparency limited. Ongoing reforms and a shift towards cloud-based systems add further uncertainty.

Security specialists caution that without strong governance, expertise and long-term planning, AI investments could expose the public sector to cyber risks and poor decision-making. Armenia’s challenge, they argue, lies in moving quickly enough to seize emerging opportunities while ensuring that AI adoption strengthens, rather than undermines, institutional capacity and human judgement.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

NVIDIA expands open AI tools for robotics

NVIDIA has unveiled a new suite of open physical AI models and frameworks aimed at accelerating robotics and autonomous systems development. The announcement was made at CES 2026 in the US.

The new tools span simulation, synthetic data generation, training orchestration and edge deployment. NVIDIA said the stack enables robots and autonomous machines to reason, learn and act in real-world environments using shared 3D standards.

Developers showcased applications ranging from construction and factory robots to surgical and service systems. Companies including Caterpillar and NEURA Robotics demonstrated how digital twins and open AI models improve safety and efficiency.

NVIDIA said open-source collaboration is central to advancing physical AI. The company aims to shorten development cycles while supporting safer deployment of autonomous machines across industries.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Millions use Telegram to create AI deepfake nudes as digital abuse escalates

A global wave of deepfake abuse is spreading across Telegram as millions of users generate and share sexualised images of women without consent.

Researchers have identified at least 150 active channels offering AI-generated nudes of celebrities, influencers and ordinary women, often for payment. The widespread availability of advanced AI tools has turned intimate digital abuse into an industrialised activity.

Telegram states that deepfake pornography is banned and says moderators removed nearly one million violating posts in 2025. Yet new channels appear immediately after old ones are shut, enabling users to exchange tips on how to bypass safety controls.

The rise of nudification apps, downloaded more than 700 million times from major app stores, adds further momentum to an expanding ecosystem that encourages harassment rather than accountability.

Experts argue that the celebration of such content reflects entrenched misogyny instead of simple technological misuse. Women targeted by deepfakes face isolation, blackmail, family rejection and lost employment opportunities.

Legal protections remain minimal in much of the world, with fewer than 40% of countries having laws that address cyber-harassment or stalking.

Campaigners warn that women in low-income regions face the most significant risks due to poor digital literacy, limited resources and inadequate regulatory frameworks.

The damage inflicted on victims is often permanent, as deepfake images circulate indefinitely across platforms and are extremely difficult to remove, undermining safety, dignity and long-term opportunities.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UK minister signals interest in universal basic income amid rising AI job disruption

Jason Stockwood, the UK investment minister, has suggested that a universal basic income could help protect workers as AI reshapes the labour market.

He argued that rapid advances in automation will cause disruptive shifts across several sectors, meaning the country must explore safety mechanisms rather than allowing sudden job losses to deepen inequality. He added that workers will need long-term retraining pathways as roles disappear.

Concern about the economic impact of AI continues to intensify.

Research by Morgan Stanley indicates that the UK is losing more jobs than it is creating because of automation and is being affected more severely than other major economies.

Warnings from London’s mayor, Sadiq Khan, and senior global business figures, including JP Morgan chief executive Jamie Dimon, point to the risk of mass unemployment unless governments and companies step in with support.

Stockwood confirmed that a universal basic income is not part of formal government policy, although he said people inside government are discussing the idea.

He took up his post in September after a long career in the technology sector, including senior roles at Match.com, Lastminute.com and Travelocity, as well as leading the sale of Simply Business.

Additionally, Stockwood said he no longer pushes for stronger wealth-tax measures, but he criticised wealthy individuals who seek to minimise their contributions to public finances. He suggested that those who prioritise tax avoidance lack commitment to their communities and the country’s long-term success.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Italy becomes test case for WhatsApp AI chatbot monetisation

Meta has announced a new pricing model for third-party AI chatbots operating on WhatsApp, where regulators require the company to permit them, starting with Italy.

From 16 February 2026, developers will be charged about $0.0691 (€0.0572/£0.0498) per AI-generated response that is not a predefined template.

This move follows Italy’s competition authority intervening to force Meta to suspend its ban on third-party AI bots on the WhatsApp Business API, which had taken effect in January and led many providers (like OpenAI, Perplexity and Microsoft) to discontinue their chatbots on the platform.

Meta says the fee applies only where legally required to open chatbot access, and this pricing may set a precedent if other markets compel similar access.

WhatsApp already charges businesses for ‘template’ API messages (e.g. notifications, authentication), but this is the first instance of explicit charges tied to AI responses, potentially leading to high costs for high-volume chatbot usage.
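To give a rough sense of scale, the per-response fee compounds quickly at volume. The sketch below multiplies the reported rate by some hypothetical monthly volumes (the volumes are illustrative assumptions, not figures from Meta):

```python
PER_RESPONSE_USD = 0.0691  # reported fee per non-template AI-generated response

def monthly_cost(responses_per_month: int) -> float:
    """Estimated monthly fee for AI-generated (non-template) responses."""
    return responses_per_month * PER_RESPONSE_USD

# Hypothetical volumes for a small, mid-sized and large chatbot deployment
for volume in (10_000, 100_000, 1_000_000):
    print(f"{volume:>9,} responses/month -> ${monthly_cost(volume):,.2f}")
```

At a million responses a month the fee alone approaches $70,000, which is why high-volume providers may rethink per-message chatbot designs on the platform.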

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google brings AI agent to Chrome in the US

Google is rolling out an AI-powered browsing agent inside Chrome, allowing users to automate routine online tasks. The feature is being introduced in the US for AI Pro and AI Ultra subscribers.

The Gemini agent can interact directly with websites, including opening pages, clicking buttons and completing complex online forms. Testers reported successful use for tasks such as tax paperwork and licence renewals.

Google said Gemini AI integrates with password management tools while requiring user confirmation for payments and final transactions. Security safeguards and fraud detection systems have been built into Chrome for US users.

The update reflects Alphabet’s strategy to reposition Chrome as an intelligent, agentic browser. Google aims to move beyond search toward AI-driven personal task management.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Experts dismiss AI link in Amazon’s 16,000 job cuts

Amazon has announced a new round of corporate job reductions affecting around 16,000 roles worldwide; however, the company insists the move is aimed at streamlining operations rather than replacing workers with AI. Instead, the layoffs are intended to reduce management layers and bureaucracy following years of rapid expansion.

Experts broadly support Amazon’s explanation, noting that the cuts do not signal widespread AI-driven job displacement. Although Amazon’s chief executive has acknowledged that generative AI could reduce corporate workforce needs in the future, analysts emphasise that current AI systems are not yet capable of replacing complex corporate roles at scale.

The decision comes as Amazon continues to adjust after significant pandemic-era workforce growth, when online shopping surged and the company expanded rapidly. As consumer behaviour has shifted back towards physical retail, the company has focused on cost-cutting and workforce resizing.

Specialists nonetheless caution against overstating AI’s immediate impact on employment. While AI may affect some entry-level or routine tasks, experts argue that its capabilities have levelled off, meaning human expertise remains essential across most corporate functions.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!