AI could harm the planet but also help save it

AI is often criticised for its growing electricity and water use, but experts argue it can also support sustainability. AI can reduce emissions, save energy, and optimise resource use across multiple sectors.

In agriculture, AI-powered irrigation helps farmers use water more efficiently. In Chile, precision systems reduced water consumption by up to 30%, while farmers earned extra income from verified savings.

Data centres and energy companies are deploying AI to improve efficiency, predict workloads, optimise cooling, monitor methane leaks, and schedule maintenance. These measures help reduce emissions and operational costs.

Buildings and aviation are also benefiting from AI. Smart building systems manage heating, cooling, and appliances more efficiently, while AI-optimised flight routes cut fuel consumption and contrail formation, suggesting that wider adoption could help fight climate change.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Critical AI toy security failure exposes children’s data

The exposure of more than 50,000 children’s chat logs by AI toy company Bondu highlights serious gaps in child data protection. Sensitive personal information, including names, birth dates, and family details, was accessible through a poorly secured parental portal, raising immediate concerns about children’s privacy and safety.

The incident also underscores the absence of mandatory security-by-design standards for AI products aimed at children, with weak safeguards enabling unauthorised access and exposing vulnerable users to serious risks.

Beyond the specific flaw, the case raises wider concerns about AI toys used by children. Researchers warned that the exposed data could be misused, strengthening calls for stricter rules and closer oversight of AI systems designed for minors.

Concerns also extend to transparency around data handling and AI supply chains. Uncertainty over whether children’s data was shared with third-party AI model providers points to the need for clearer rules on data flows, accountability, and consent in AI ecosystems.

Finally, the incident has added momentum to policy discussions on restricting or pausing the sale of interactive AI toys. Lawmakers are increasingly considering precautionary measures while more robust child-focused AI safety frameworks are developed.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Enforcement Directorate alleges AI bots rigged games on WinZO platform

The Enforcement Directorate (ED) has alleged in a prosecution complaint before a special court in Bengaluru that WinZO, an online real-money gaming platform with millions of users, manipulated outcomes in its games, contrary to public assurances of fairness and transparency.

The ED alleges that WinZO deployed AI-powered bots, algorithmic player profiles and simulated gameplay data to control game outcomes. According to the complaint, WinZO hosted over 100 games on its mobile app and claimed a large user base, especially in smaller cities.

Its probe found that until late 2023, bots directly competed against real users, and from May 2024 to August 2025, the company used simulated profiles based on historical user data without disclosing this to players.

These practices were allegedly concealed within internal terminology such as ‘Engagement Play’ and ‘Past Performance of Player’.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

GDPR violation reports surge across Europe in 2025, study finds

European data protection authorities recorded a sharp rise in GDPR violation reports in 2025, according to a new study by law firm DLA Piper, signalling growing regulatory pressure across the European Union.

Average daily reports surpassed 400 for the first time since the regulation entered into force in 2018, reaching 443 incidents per day, a 22% increase on the previous year. The firm noted that expanding digital systems, new breach reporting laws, and geopolitical cyber risks may be driving the surge.
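As a quick sanity check on those figures (a back-of-the-envelope sketch using only the numbers quoted above, not additional data from the DLA Piper report), a 22% rise to 443 daily reports implies roughly 363 reports per day in the prior year, and about 162,000 incidents over 2025:

```python
# Back-of-the-envelope check on the reported GDPR breach figures.
daily_2025 = 443                 # average daily reports in 2025
increase = 0.22                  # reported year-on-year rise

daily_prior = daily_2025 / (1 + increase)   # implied prior-year daily average
annual_2025 = daily_2025 * 365              # implied annual total for 2025

print(round(daily_prior))   # 363 reports per day in the previous year
print(annual_2025)          # 161695 reports implied over a 365-day year
```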

Despite the higher number of cases in the EU, total fines remained broadly stable at around €1.2 billion for the year, pushing cumulative GDPR penalties since 2018 to €7.1 billion, underlining regulators’ continued willingness to impose major sanctions.

Ireland once again led enforcement figures, with fines imposed by its Data Protection Commission totalling €4.04 billion, reflecting the presence of major technology firms headquartered there, including Meta, Google, and Apple.

Recent headline penalties included a €1.2 billion fine against Meta and a €530 million sanction against TikTok over data transfers to China, while courts across Europe increasingly consider compensation claims linked to GDPR violations.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI investment gathers pace as Armenia seeks regional influence

Armenia is stepping up efforts to develop its AI sector, positioning itself as a potential regional hub for innovation. The government has announced plans to build a large-scale AI data centre backed by a $500 million investment, with operations expected to begin in 2026.

Officials say the project could support start-ups, research and education, while strengthening links between science and industry.

The initiative is being developed through a partnership involving the Armenian government, US chipmaker Nvidia, cloud company Firebird.ai and Team Group. The United States has already approved export licences for advanced chips, a move experts describe as strategically significant given global competition for semiconductor supply.

Armenian officials argue the project signals the country’s intention to participate actively in the global AI economy rather than remain on the sidelines.

Despite growing international attention, including recognition of Armenia’s technology leadership in global rankings, experts warn that the country lacks a clear and unified AI strategy. AI is already being used in areas such as agriculture mapping, tax risk analysis and social services, but deployment remains fragmented and transparency limited. Ongoing reforms and a shift towards cloud-based systems add further uncertainty.

Security specialists caution that without strong governance, expertise and long-term planning, AI investments could expose the public sector to cyber risks and poor decision-making. Armenia’s challenge, they argue, lies in moving quickly enough to seize emerging opportunities while ensuring that AI adoption strengthens, rather than undermines, institutional capacity and human judgement.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Conversational advertising arrives as OpenAI integrates sponsored content into ChatGPT

OpenAI has begun testing advertising placements inside ChatGPT, marking a shift toward monetising one of the world’s most widely used AI platforms. Sponsored content now appears below chatbot responses for free and low-cost users, integrating promotions directly into conversational queries.

Ads remain separate from organic answers, with OpenAI saying commercial content will not influence AI-generated responses. Users can see why specific ads appear, dismiss irrelevant placements, and disable personalisation. Advertising is excluded for younger users and sensitive topics.

Initial access is limited to enterprise partners, with broader availability expected later. Premium subscription tiers continue without ads, reflecting a freemium model similar to streaming platforms offering both paid and ad-supported options.

Pricing places ChatGPT ads among the most expensive digital formats. The value lies in reaching users at high-intent moments, such as during product research and purchase decisions. Measurement tools remain basic, tracking only impressions and clicks.

OpenAI’s move into advertising signals a broader shift as conversational AI reshapes how people discover information. Future performance data and targeting features will determine whether ChatGPT becomes a core ad channel or a premium niche format.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

China moves toward data centres in orbit

China is planning to develop large-scale space-based data centres over the next five years as part of a broader push to support AI development. The China Aerospace Science and Technology Corporation (CASC) has announced plans to build gigawatt-class digital infrastructure in orbit, according to Chinese state broadcaster CCTV.

Under CASC’s five-year development plan, the space data centres are expected to combine cloud, edge and terminal technologies, allowing computing power, data storage and communication capacity to operate as an integrated system. The aim is to create high-performance infrastructure capable of supporting advanced AI workloads beyond Earth.

The initiative follows a recent CASC policy proposal calling for solar-powered, gigawatt-scale space-based hubs to supply energy for AI processing. The proposal aligns with China’s upcoming 15th Five-Year Plan, which is set to place AI at the centre of national development priorities.

China has already taken early steps in this direction. In May 2025, Zhejiang Lab launched 12 low Earth orbit satellites to form the first phase of its ‘Three-Body Computing Constellation.’ The research institute plans to eventually deploy around 2,800 satellites, targeting a total computing power of 1,000 peta operations per second.
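Those constellation figures imply a modest per-satellite share of the computing target (a rough calculation based only on the numbers above, not on CASC or Zhejiang Lab specifications):

```python
# Rough per-satellite computing share for the planned constellation.
total_pops = 1_000      # target total computing power, peta operations/second
satellites = 2_800      # planned constellation size

per_satellite = total_pops / satellites          # in peta operations/second
print(f"{per_satellite * 1000:.0f} tera-ops/s")  # prints "357 tera-ops/s"
```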

Interest in space-based data centres is growing globally. European aerospace firm Thales Alenia Space has been studying its feasibility since 2023, while companies such as SpaceX, Blue Origin, and several startups in the US and the UAE are exploring similar concepts at varying stages of development and ambition.

Supporters argue that space data centres could reduce environmental impacts on Earth, benefit from constant solar energy and simplify cooling. However, experts warn that operating in space brings its own challenges, including exposure to radiation, solar flares and space debris, as well as higher costs and greater difficulty when repairs are needed.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

EU confronts Grok abuse as Brussels tests its digital power

The European Commission has opened a formal investigation into Grok after the tool produced millions of sexualised images of women and children.

The scrutiny centres on whether X failed to carry out adequate risk assessments before releasing the undressing feature in the European market. The case arrives as ministers, including Sweden’s deputy prime minister, publicly reveal being targeted by the technology.

Brussels is preparing to use its strongest digital laws instead of deferring to US pressure. The Digital Services Act allows the European Commission to fine major platforms or force compliance measures when systemic harms emerge.

Experts argue the Grok investigation represents an important test of European resolve, particularly as the bloc tries to show it can hold powerful companies to account.

Concerns remain about the willingness of the EU to act decisively. Reports suggest the opening of the probe was delayed because of a tariff dispute with Washington, raising questions about whether geopolitical considerations slowed the enforcement response.

Several lawmakers say the delay undermined confidence in the bloc’s commitment to protecting fundamental rights.

The investigation could last months and may have wider implications for content ranking systems already under scrutiny.

Critics say financial penalties may not be enough to change behaviour at X, yet the case is still viewed as a pivotal moment for European digital governance. Observers believe a firm outcome would demonstrate that emerging harms linked to synthetic media cannot be ignored.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Millions use Telegram to create AI deepfake nudes as digital abuse escalates

A global wave of deepfake abuse is spreading across Telegram as millions of users generate and share sexualised images of women without consent.

Researchers have identified at least 150 active channels offering AI-generated nudes of celebrities, influencers and ordinary women, often for payment. The widespread availability of advanced AI tools has turned intimate digital abuse into an industrialised activity.

Telegram states that deepfake pornography is banned and says moderators removed nearly one million violating posts in 2025. Yet new channels appear immediately after old ones are shut, enabling users to exchange tips on how to bypass safety controls.

The rise of nudification apps on major app stores, downloaded more than 700 million times, adds further momentum to an expanding ecosystem that encourages harassment rather than accountability.

Experts argue that the celebration of such content reflects entrenched misogyny instead of simple technological misuse. Women targeted by deepfakes face isolation, blackmail, family rejection and lost employment opportunities.

Legal protections remain minimal in much of the world, with fewer than 40% of countries having laws that address cyber-harassment or stalking.

Campaigners warn that women in low-income regions face the most significant risks due to poor digital literacy, limited resources and inadequate regulatory frameworks.

The damage inflicted on victims is often permanent, as deepfake images circulate indefinitely across platforms and are nearly impossible to remove, with lasting consequences for their safety, dignity and long-term opportunities.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Prism launches as OpenAI’s new workspace for scientific papers

OpenAI has launched Prism, a cloud-based LaTeX workspace designed to streamline the drafting, collaboration, and publication of academic papers. The tool integrates writing, citation management, real-time collaboration, and AI assistance into a single environment to reduce workflow friction.

Built specifically for scientific use, Prism embeds GPT-5.2 directly inside documents rather than as a separate chatbot. Researchers can rewrite sections, verify equations, test arguments, and clarify explanations without leaving the editing interface, positioning AI as a background collaborator.

Users can start new LaTeX projects or upload existing files through prism.openai.com using a ChatGPT account. Co-authors can join instantly, enabling simultaneous editing while maintaining structured formatting for equations, references, and manuscript layout.

OpenAI says Prism supports academic search, converts handwritten formulas into clean LaTeX, and allows voice-driven edits for faster reviews. Completed papers export as publication-ready PDFs alongside full source files.
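For readers unfamiliar with the format Prism manages, a minimal LaTeX manuscript looks like the following (a generic illustration of standard LaTeX, not Prism-specific markup):

```latex
\documentclass{article}
\usepackage{amsmath}   % standard package for equation environments

\title{A Minimal Manuscript}
\author{First Author \and Second Author}

\begin{document}
\maketitle

\section{Introduction}
Inline citations \cite{example2025} and numbered equations such as
\begin{equation}
  E = mc^2
\end{equation}
are the kind of structured content such a workspace keeps consistently formatted.

\begin{thebibliography}{9}
  \bibitem{example2025} A.~Author, \emph{An Example Paper}, 2025.
\end{thebibliography}
\end{document}
```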

Initially available for free to personal ChatGPT users, the workspace will later expand to Business, Enterprise, and Education plans. The company frames the tool as a practical productivity layer rather than a research disruption platform.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!