Tencent Cloud sites exposed credentials and source code in major security lapse

Researchers have uncovered severe misconfigurations in two Tencent Cloud sites that exposed sensitive credentials and internal source code to the public. The flaws could have given attackers access to Tencent’s backend infrastructure and critical internal services.

Cybernews discovered the data leaks in July 2025, finding hardcoded plain-text passwords, a sensitive internal .git directory, and configuration files linked to Tencent’s load balancer and JEECG development platform.

Weak passwords, built from predictable patterns like the company name and year, increased the risk of exploitation.
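
The pattern described (company name plus a year) is straightforward to screen for programmatically. Below is a minimal sketch of such a check; the `looks_predictable` helper is a hypothetical illustration, not part of any tooling used by Tencent or the researchers:

```python
import re

def looks_predictable(password: str, company: str) -> bool:
    """Flag passwords built from predictable patterns, e.g. the company
    name or a four-digit year (hypothetical illustration only)."""
    p = password.lower()
    if company.lower() in p:
        return True                                  # contains the company name
    return bool(re.search(r"(19|20)\d{2}", p))       # contains a plausible year

# A password like 'Tencent2025' would be flagged; a random passphrase would not.
```

Checks of this kind are commonly enforced at password-creation time, precisely to stop the guessable patterns the researchers reported.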

The exposed data may have been accessible since April, leaving months of opportunity for scraping bots or malicious actors.

With administrative console access, attackers could have tampered with APIs, planted malicious code, pivoted deeper into Tencent’s systems, or abused the trusted domain for phishing campaigns.

Tencent confirmed the incident as a ‘known issue’ and has since closed access, though questions remain over how many parties may have already retrieved the exposed information.

Security experts warn that even minor oversights in cloud operations can cascade into serious vulnerabilities, especially for platforms trusted by millions worldwide.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ChatGPT faces scrutiny as OpenAI updates protections after teen suicide case

OpenAI has announced new safety measures for its popular chatbot following a lawsuit filed by the parents of a 16-year-old boy who died by suicide after relying on ChatGPT for guidance.

The parents allege the chatbot isolated their son and contributed to his death earlier in the year.

The company said it will improve ChatGPT’s ability to detect signs of mental distress, including indirect expressions such as users mentioning sleep deprivation or feelings of invincibility.

It will also strengthen safeguards around suicide-related conversations, which OpenAI admitted can break down in prolonged chats. Planned updates include parental controls, access to usage details, and clickable links to local emergency services.

OpenAI stressed that its safeguards work best during short interactions, acknowledging weaknesses in longer exchanges. It also said it is considering building a network of licensed professionals that users could access through ChatGPT.

The company added that content filtering errors, where serious risks are underestimated, will also be addressed.

The lawsuit comes amid wider scrutiny of AI tools by regulators and mental health experts. Attorneys general from more than 40 US states recently warned AI companies of their duty to protect children from harmful or inappropriate chatbot interactions.

Critics argue that reliance on chatbots for support instead of professional care poses growing risks as usage expands globally.

Green AI and the battle between progress and sustainability

AI is increasingly recognised for its transformative potential and growing environmental footprint across industries. The development and deployment of large-scale AI models require vast computational resources, significant amounts of electricity, and extensive cooling infrastructure.

For instance, studies have shown that training a single large language model can consume as much electricity as several hundred households use in a year, while data centres operated by companies like Google and Microsoft require millions of litres of water annually to keep servers cool.

That has sparked an emerging debate around what is now often called ‘Green AI’, the effort to balance technological progress with sustainability concerns. On one side, critics warn that the rapid expansion of AI comes at a steep ecological cost, from high carbon emissions to intensive water and energy consumption.

On the other hand, proponents argue that AI can be a powerful tool for achieving sustainability goals, helping optimise energy use, supporting climate research, and enabling greener industrial practices. The tension between sustainability and progress is becoming central to discussions on digital policy, raising key questions.

Should governments and companies prioritise environmental responsibility, even if it slows down innovation? Or should innovation come first, with sustainability challenges addressed through technological solutions as they emerge?

Sustainability challenges

In the following paragraphs, we present the main sustainability challenges associated with the rapid expansion of AI technologies.

Energy consumption

The training of large-scale AI models requires massive computational power. Estimates suggest that developing state-of-the-art language models can demand thousands of GPUs running continuously for weeks or even months.

According to a 2019 study from the University of Massachusetts Amherst, training a single natural language processing model consumed roughly 284 tons of CO₂, equivalent to the lifetime emissions of five cars. As AI systems grow larger, their energy appetite only increases, raising concerns about the long-term sustainability of this trajectory.

Carbon emissions

Carbon emissions are closely tied to energy use. Unless powered by renewable sources, data centres rely heavily on electricity grids dominated by fossil fuels. Research indicates that the carbon footprint of training advanced models like GPT-3 and beyond is several orders of magnitude higher than that of earlier generations. That research highlights the environmental trade-offs of pursuing ever more powerful AI systems in a world struggling to meet climate targets.

Water usage and cooling needs

Beyond electricity, AI infrastructure consumes vast amounts of water for cooling. For example, Google reported that in 2021 its data centre in The Dalles, Oregon, used over 1.2 billion litres of water to keep servers cool. Similarly, Microsoft faced criticism in Arizona for operating data centres in drought-prone areas while local communities dealt with water restrictions. Such cases highlight the growing tension between AI infrastructure needs and local environmental realities.

Resource extraction and hardware demands

The production of AI hardware also has ecological costs. High-performance chips and GPUs depend on rare earth minerals and other raw materials, the extraction of which often involves environmentally damaging mining practices. That adds a hidden but significant footprint to AI development, extending beyond data centres to global supply chains.

Inequality in resource distribution

Finally, the environmental footprint of AI amplifies global inequalities. Wealthier countries and major corporations can afford the infrastructure and energy needed to sustain AI research, while developing countries face barriers to entry.

At the same time, the environmental consequences, whether in the form of emissions or resource shortages, are shared globally. That creates a digital divide where the benefits of AI are unevenly distributed, while the costs are widely externalised.

Progress & solutions

While AI consumes vast amounts of energy, it is also being deployed to reduce energy use in other domains. Google’s DeepMind, for example, developed an AI system that optimised cooling in its data centres, cutting energy consumption for cooling by up to 40%. Similarly, IBM has used AI to optimise building energy management, reducing operational costs and emissions. These cases show how the same technology that drives consumption can also be leveraged to reduce it.

AI has also become crucial in climate modelling, weather prediction, and renewable energy management. For example, Microsoft’s AI for Earth program supports projects worldwide that use AI to address biodiversity loss, climate resilience, and water scarcity.

Artificial intelligence also plays a role in integrating renewable energy into smart grids, such as in Denmark, where AI systems balance fluctuations in wind power supply with real-time demand.

There is growing momentum toward making AI itself more sustainable. OpenAI and other research groups have increasingly focused on techniques like model distillation (compressing large models into smaller versions) and low-rank adaptation (LoRA) methods, which allow for fine-tuning large models without retraining the entire system.
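
The parameter savings behind low-rank adaptation can be illustrated with a short sketch in plain NumPy (hypothetical dimensions, not tied to any particular model or library):

```python
import numpy as np

d, r = 1024, 8                      # hypothetical hidden size and LoRA rank
W = np.random.randn(d, d)           # pretrained weight matrix, kept frozen
A = np.random.randn(r, d) * 0.01    # trainable low-rank factor
B = np.zeros((d, r))                # zero-initialised, so the update starts at 0

def forward(x):
    # LoRA adds a trainable low-rank update B @ A on top of the frozen weight
    return x @ W.T + x @ (B @ A).T

full_params = W.size                # parameters full fine-tuning would update
lora_params = A.size + B.size       # parameters LoRA actually trains
print(full_params, lora_params)     # 1048576 vs 16384: a 64x reduction
```

The same idea scales to real models: only the small `A` and `B` matrices in each layer are updated, which is why fine-tuning no longer requires retraining the entire system.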

Meanwhile, startups like Hugging Face promote open-source, lightweight models (like DistilBERT) that drastically cut training and inference costs while remaining highly effective.

Hardware manufacturers are also moving toward greener solutions. NVIDIA and Intel are working on chips with lower energy requirements per computation. On the infrastructure side, major providers are pledging ambitious climate goals.

Microsoft has committed to becoming carbon negative by 2030, while Google aims to operate on 24/7 carbon-free energy by 2030. Amazon Web Services is also investing heavily in renewable-powered data centres to offset the footprint of its rapidly growing cloud services.

Governments and international organisations are beginning to address the sustainability dimension of AI. The European Union’s AI Act introduces transparency and reporting requirements that could extend to environmental considerations in the future.

In addition, initiatives such as the OECD’s AI Principles highlight sustainability as a core value for responsible AI. Beyond regulation, some governments fund research into ‘green AI’ practices, including Canada’s support for climate-oriented AI startups and the European Commission’s Horizon Europe program, which allocates resources to environmentally conscious AI projects.

Balancing the two sides

The debate around Green AI ultimately comes down to finding the right balance between environmental responsibility and technological progress. On one side, the race to build ever larger and more powerful models has accelerated innovation, driving breakthroughs in natural language processing, robotics, and healthcare. On the other, the ‘bigger is better’ approach comes with significant sustainability costs that are increasingly difficult to ignore.

Some argue that scaling up is essential for global competitiveness. If one region imposes strict environmental constraints on AI development while another prioritises innovation at any cost, the former risks falling behind in technological leadership. This dilemma raises a geopolitical question: sustainability standards may be desirable, but they must also account for the competitive dynamics of global AI development.

At the same time, advocates of smaller and more efficient models suggest that technological progress does not necessarily require exponential growth in size and energy demand. Innovations in model efficiency, greener hardware, and renewable-powered infrastructure demonstrate that sustainability and progress are not mutually exclusive.

Instead, they can be pursued in tandem if the right incentives, investments, and policies are in place. That leaves governments, companies, and researchers facing a complex but urgent question: should the future of AI prioritise scale and speed, or should it embrace efficiency and sustainability as guiding principles?

Conclusion

The discussion on Green AI highlights one of the central dilemmas of our digital age: how to pursue technological progress without undermining environmental sustainability. On the one hand, the growth of large-scale AI systems brings undeniable costs in terms of energy, water, and resource consumption. On the other, the very same technology holds the potential to accelerate solutions to global challenges, from optimising renewable energy to advancing climate research.

Rather than framing sustainability and innovation as opposing forces, the debate increasingly suggests the need for integration. Policies, corporate strategies, and research initiatives will play a decisive role in shaping this balance. Whether through regulations that encourage transparency, investments in renewable infrastructure, or innovations in model efficiency, the path forward will depend on aligning technological ambition with ecological responsibility.

In the end, the future of AI may not rest on choosing between sustainability and progress, but on finding ways to ensure that progress itself becomes sustainable.

Cyberattack disrupts Nevada government systems

The State of Nevada reported a cyberattack affecting several state government systems, with recovery efforts underway. Some websites and phone lines may be slow or offline while officials restore operations.

Governor Joe Lombardo’s office stated there is no evidence that personal information has been compromised, emphasising that the issue is limited to state systems. The incident is under investigation by both state and federal authorities, although technical details have not been released.

Several agencies, including the Department of Motor Vehicles, have been affected, prompting temporary office closures until normal operations can resume. Emergency services, including 911, continue to operate without disruption.

Officials are prioritising system validation and safe restoration to prevent further disruption to state services.

Greece strengthens crypto rules to align with EU standards

Greek authorities are enforcing stricter regulations on the crypto sector to strengthen oversight and align with European standards. The move targets money laundering and tax evasion, reflecting Athens’ intent to bring order to the industry.

Digital asset exchanges and wallet providers will face a rigorous licensing process. Applicants must submit a complete business dossier, disclose management and shareholder details, and pass extensive checks before being allowed to operate.

Non-compliant platforms risk being barred from the market.

Financial regulators will monitor crypto transactions closely, with powers to freeze suspicious digital assets and trace funds. Authorities aim to prevent illegal capital flows while boosting investor confidence through enhanced transparency.

Taxation rules for crypto are expected this fall, with capital gains taxes set at 15% for private investors and potentially higher for companies. Some crypto services may also be subject to 24% VAT, with final rates announced in the coming months.

Insecure code blamed for breaches at 74 percent of companies

Nearly three-quarters of companies have experienced a security breach in the past year due to flaws in their software code.

According to a new SecureFlag study, 74% of organisations admitted to at least one incident caused by insecure code, with almost half suffering multiple breaches.

The report has renewed scrutiny of AI-generated code, which is growing in popularity across the industry. While some experts claim AI can outperform humans, concerns remain that these tools are reproducing insecure coding patterns at scale.

On the upside, companies are increasing developer security training. Around 44% provide quarterly updates, while 29% do so monthly.

Most use video tutorials and eLearning platforms, with a third hosting interactive events like capture-the-flag hacking games.

US pushes chip manufacturing to boost AI dominance

Donald Trump’s AI Action Plan, released in July 2025, places domestic semiconductor manufacturing at the heart of US efforts to dominate global AI. The plan supports deregulation, domestic production and export of full-stack technology, positioning chips as critical to national power.

Lawmakers and tech leaders have previously flagged tracking chips post-sale as viable, with companies like Google already using such methods. Trump’s plan suggests adopting location tracking and enhanced end-use monitoring to ensure chips avoid blacklisted destinations.

Trump has pressed for more private sector investment in US fabs, reportedly using tariff threats to extract pledges from chipmakers like TSMC. The cost of building and running chip plants in the US remains significantly higher than in Asia, raising questions about sustainability.

America’s success in AI and semiconductors will likely depend on how well it balances domestic goals with global collaboration. Overregulation risks slowing innovation, while unilateral restrictions may alienate allies and reduce long-term influence.

Netflix limits AI use in productions with new rules

Netflix has issued detailed guidance for production companies on the approved use of generative AI. The guidelines allow AI tools for early ideation tasks such as moodboards or reference images, but stricter oversight applies beyond that stage.

The company outlined five guiding principles. These include ensuring generated content does not replicate copyrighted works, maintaining security of inputs, avoiding use of AI in final deliverables, and prohibiting storage or reuse of production data by AI tools.

Enterprises or vendors working on Netflix content must pass the platform’s AI compliance checks at every stage.

Netflix has already used AI to reduce VFX costs on projects like The Eternaut, but has moved to formalise boundaries around how and when the technology is applied.

AI model forecasts Bitcoin to fall below $100,000

Bitcoin has slipped below $110,000, and according to Finbold’s use of ChatGPT-5, a further drop could occur in the coming weeks. The model outlined technical resistance and seasonal factors pointing to September weakness.

Key levels around $112,000 and $106,000 are under pressure, with the AI projecting a sharp decline toward $98,000 if support breaks. Historically, September has been one of Bitcoin’s worst-performing months, adding to the bearish outlook.

Despite the short-term caution, demand from ETFs and long-term holders may offer support between $95,000 and $98,000. Longer-term technicals remain intact, with the 200-day average sitting near $95,000.

NFL adds Microsoft Copilot to sidelines

The NFL has begun deploying Microsoft Copilot across all 32 clubs to support faster and more intelligent decision-making during games. Over 2,500 Surface Copilot+ devices have been distributed to coaches, analysts and staff for use on the sidelines and in the booth.

Teams now have access to AI-powered tools like a Copilot-powered filter that quickly pulls key moments, such as penalties or fumbles, reducing the need to scrub through footage manually. Microsoft 365 Copilot also supports analysts with real-time trend spotting in Excel dashboards during matches.

To ensure reliability, Microsoft has provided hard-wired carts for connectivity even when Wi-Fi drops. These systems are linked to secure Windows servers managed by the NFL, safeguarding critical game data under various stadium conditions.

Los Angeles Rams head coach Sean McVay said the team has embraced the changes, calling Copilot ‘a valuable tool’ for navigating the pressure of real-time decisions. NFL leadership echoed his optimism, framing AI as essential to the future of the sport.
