ChatGPT faces scrutiny as OpenAI updates protections after teen suicide case

OpenAI has announced new safety measures for its popular chatbot following a lawsuit filed by the parents of a 16-year-old boy who died by suicide after relying on ChatGPT for guidance.

The parents allege the chatbot isolated their son and contributed to his death earlier in the year.

The company said it will improve ChatGPT’s ability to detect signs of mental distress, including indirect expressions such as users mentioning sleep deprivation or feelings of invincibility.

It will also strengthen safeguards around suicide-related conversations, which OpenAI admitted can break down in prolonged chats. Planned updates include parental controls, access to usage details, and clickable links to local emergency services.

OpenAI stressed that its safeguards work best during short interactions, acknowledging weaknesses in longer exchanges. It also said it is considering building a network of licensed professionals that users could access through ChatGPT.

The company added that content filtering errors, where serious risks are underestimated, will also be addressed.

The lawsuit comes amid wider scrutiny of AI tools by regulators and mental health experts. Attorneys general from more than 40 US states recently warned AI companies of their duty to protect children from harmful or inappropriate chatbot interactions.

Critics argue that reliance on chatbots for support instead of professional care poses growing risks as usage expands globally.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Green AI and the battle between progress and sustainability

AI is increasingly recognised for its transformative potential and growing environmental footprint across industries. The development and deployment of large-scale AI models require vast computational resources, significant amounts of electricity, and extensive cooling infrastructure.

For instance, studies have shown that training a single large language model can consume as much electricity as several hundred households use in a year, while data centres operated by companies like Google and Microsoft require millions of litres of water annually to keep servers cool.

That has sparked an emerging debate around what is now often called ‘Green AI’, the effort to balance technological progress with sustainability concerns. On one side, critics warn that the rapid expansion of AI comes at a steep ecological cost, from high carbon emissions to intensive water and energy consumption.

On the other hand, proponents argue that AI can be a powerful tool for achieving sustainability goals, helping optimise energy use, supporting climate research, and enabling greener industrial practices. The tension between sustainability and progress is becoming central to discussions on digital policy, raising key questions.

Should governments and companies prioritise environmental responsibility, even if it slows down innovation? Or should innovation come first, with sustainability challenges addressed through technological solutions as they emerge?

Sustainability challenges

In the following paragraphs, we present the main sustainability challenges associated with the rapid expansion of AI technologies.

Energy consumption

The training of large-scale AI models requires massive computational power. Estimates suggest that developing state-of-the-art language models can demand thousands of GPUs running continuously for weeks or even months.

According to a 2019 study from the University of Massachusetts Amherst, training a single natural language processing model consumed roughly 284 tons of CO₂, equivalent to the lifetime emissions of five cars. As AI systems grow larger, their energy appetite only increases, raising concerns about the long-term sustainability of this trajectory.

Carbon emissions

Carbon emissions are closely tied to energy use. Unless powered by renewable sources, data centres rely heavily on electricity grids dominated by fossil fuels. Research indicates that the carbon footprint of training advanced models like GPT-3 and beyond is several orders of magnitude higher than that of earlier generations. That research highlights the environmental trade-offs of pursuing ever more powerful AI systems in a world struggling to meet climate targets.

Water usage and cooling needs

Beyond electricity, AI infrastructure consumes vast amounts of water for cooling. For example, Google reported that in 2021 its data centre in The Dalles, Oregon, used over 1.2 billion litres of water to keep servers cool. Similarly, Microsoft faced criticism in Arizona for operating data centres in drought-prone areas while local communities dealt with water restrictions. Such cases highlight the growing tension between AI infrastructure needs and local environmental realities.

Resource extraction and hardware demands

The production of AI hardware also has ecological costs. High-performance chips and GPUs depend on rare earth minerals and other raw materials, the extraction of which often involves environmentally damaging mining practices. That adds a hidden, but significant footprint to AI development, extending beyond data centres to global supply chains.

Inequality in resource distribution

Finally, the environmental footprint of AI amplifies global inequalities. Wealthier countries and major corporations can afford the infrastructure and energy needed to sustain AI research, while developing countries face barriers to entry.

At the same time, the environmental consequences, whether in the form of emissions or resource shortages, are shared globally. That creates a digital divide where the benefits of AI are unevenly distributed, while the costs are widely externalised.

Progress & solutions

While AI consumes vast amounts of energy, it is also being deployed to reduce energy use in other domains. Google’s DeepMind, for example, developed an AI system that optimised cooling in its data centres, cutting energy consumption for cooling by up to 40%. Similarly, IBM has used AI to optimise building energy management, reducing operational costs and emissions. These cases show how the same technology that drives consumption can also be leveraged to reduce it.

AI has also become crucial in climate modelling, weather prediction, and renewable energy management. For example, Microsoft’s AI for Earth program supports projects worldwide that use AI to address biodiversity loss, climate resilience, and water scarcity.

Artificial intelligence also plays a role in integrating renewable energy into smart grids, such as in Denmark, where AI systems balance fluctuations in wind power supply with real-time demand.

There is growing momentum toward making AI itself more sustainable. OpenAI and other research groups have increasingly focused on techniques like model distillation (compressing large models into smaller versions) and low-rank adaptation (LoRA) methods, which allow for fine-tuning large models without retraining the entire system.

Winston AI Sustainability 1290x860 1

Meanwhile, startups like Hugging Face promote open-source, lightweight models (like DistilBERT) that drastically cut training and inference costs while remaining highly effective.

Hardware manufacturers are also moving toward greener solutions. NVIDIA and Intel are working on chips with lower energy requirements per computation. On the infrastructure side, major providers are pledging ambitious climate goals.

Microsoft has committed to becoming carbon negative by 2030, while Google aims to operate on 24/7 carbon-free energy by 2030. Amazon Web Services is also investing heavily in renewable-powered data centres to offset the footprint of its rapidly growing cloud services.

Governments and international organisations are beginning to address the sustainability dimension of AI. The European Union’s AI Act introduces transparency and reporting requirements that could extend to environmental considerations in the future.

In addition, initiatives such as the OECD’s AI Principles highlight sustainability as a core value for responsible AI. Beyond regulation, some governments fund research into ‘green AI’ practices, including Canada’s support for climate-oriented AI startups and the European Commission’s Horizon Europe program, which allocates resources to environmentally conscious AI projects.

Balancing the two sides

The debate around Green AI ultimately comes down to finding the right balance between environmental responsibility and technological progress. On one side, the race to build ever larger and more powerful models has accelerated innovation, driving breakthroughs in natural language processing, robotics, and healthcare. In contrast, the ‘bigger is better’ approach comes with significant sustainability costs that are increasingly difficult to ignore.

Some argue that scaling up is essential for global competitiveness. If one region imposes strict environmental constraints on AI development, while another prioritises innovation at any cost, the former risks falling behind in technological leadership. The following dilemma raises a geopolitical question that sustainability standards may be desirable, but they must also account for the competitive dynamics of global AI development.

Malaysia aims to lead Asia’s clean tech revolution through rare earth processing and circular economy efforts.

At the same time, advocates of smaller and more efficient models suggest that technological progress does not necessarily require exponential growth in size and energy demand. Innovations in model efficiency, greener hardware, and renewable-powered infrastructure demonstrate that sustainability and progress are not mutually exclusive.

Instead, they can be pursued in tandem if the right incentives, investments, and policies are in place. That type of development leaves governments, companies, and researchers facing a complex but urgent question. Should the future of AI prioritise scale and speed, or should it embrace efficiency and sustainability as guiding principles?

Conclusion

The discussion on Green AI highlights one of the central dilemmas of our digital age. How to pursue technological progress without undermining environmental sustainability. On the one hand, the growth of large-scale AI systems brings undeniable costs in terms of energy, water, and resource consumption. At the same time, the very same technology holds the potential to accelerate solutions to global challenges, from optimising renewable energy to advancing climate research.

Rather than framing sustainability and innovation as opposing forces, the debate increasingly suggests the need for integration. Policies, corporate strategies, and research initiatives will play a decisive role in shaping this balance. Whether through regulations that encourage transparency, investments in renewable infrastructure, or innovations in model efficiency, the path forward will depend on aligning technological ambition with ecological responsibility.

In the end, the future of AI may not rest on choosing between sustainability and progress, but on finding ways to ensure that progress itself becomes sustainable.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Cyberattack disrupts Nevada government systems

The State of Nevada reported a cyberattack affecting several state government systems, with recovery efforts underway. Some websites and phone lines may be slow or offline while officials restore operations.

Governor Joe Lombardo’s office stated there is no evidence that personal information has been compromised, emphasising that the issue is limited to state systems. The incident is under investigation by both state and federal authorities, although technical details have not been released.

Several agencies, including the Department of Motor Vehicles, have been affected, prompting temporary office closures until normal operations can resume. Emergency services, including 911, continue to operate without disruption.

Officials prioritise system validation and safe restoration to prevent further disruption to state services.

Would you like to learn more about AI, tech and digital diplomacyIf so, ask our Diplo chatbot!

Greece strengthens crypto rules to align with EU standards

Greek authorities are enforcing stricter regulations on the crypto sector to strengthen oversight and align with European standards. The move targets money laundering and tax evasion, reflecting Athens’ intent to bring order to the industry.

Digital asset exchanges and wallet providers will face a rigorous licensing process. Applicants must submit a complete business dossier, disclose management and shareholder details, and pass extensive checks before being allowed to operate.

Non-compliant platforms risk being barred from the market.

Financial regulators will monitor crypto transactions closely, with powers to freeze suspicious digital assets and trace funds. Authorities aim to prevent illegal capital flows while boosting investor confidence through enhanced transparency.

Taxation rules for crypto are expected this fall, with capital gains taxes set at 15% for private investors and potentially higher for companies. Some crypto services may also be subject to 24% VAT, with final rates announced in the coming months.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Insecure code blamed for 74 percent of company breaches

Nearly three-quarters of companies have experienced a security breach in the past year due to flaws in their software code.

According to a new SecureFlag study, 74% of organisations admitted to at least one incident caused by insecure code, with almost half suffering multiple breaches.

The report has renewed scrutiny of AI-generated code, which is growing in popularity across the industry. While some experts claim AI can outperform humans, concerns remain that these tools are reproducing insecure coding patterns at scale.

On the upside, companies are increasing developer security training. Around 44% provide quarterly updates, while 29% do so monthly.

Most use video tutorials and eLearning platforms, with a third hosting interactive events like capture-the-flag hacking games.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google to require developer identity checks for sideloaded Android apps

Google will begin requiring identity verification for Android developers distributing apps outside the Play Store.

Starting in September 2026, developers in Brazil, Indonesia, Singapore and Thailand must provide legal name, address, email, phone number and possibly government-issued ID for apps to install on certified Android devices.

The requirement will expand globally starting in 2027. While existing Play Store developers are already verified, all sideloaded apps will now require developer verification to target select Android users.

Google is building a separate Android Developer Console for sideloading developers and is offering a lighter-touch, free verification option for student and hobbyist creators to protect innovation while boosting accountability.

The change aims to reduce malware distribution from anonymous developers and repeat offenders, while preserving the openness of Android by allowing sideloading and third-party stores.

Developers can opt into early access programmes beginning October 2025 to provide feedback and prepare for full rollout.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Malicious apps on Google Play infected 19 million users with banking trojan

Security researchers from Zscaler’s ThreatLabz team uncovered 77 malicious Android applications on the Google Play Store, collectively downloaded over 19 million times, that distributed the Anatsa banking trojan, TeaBot, and other malware families.

Anatsa, active since at least 2020, has evolved to target over 831 banking, fintech and cryptocurrency apps globally, including platforms in Germany and South Korea. These campaigns now use direct payload installation with encrypted runtime strings and device checks to evade detection.

Deploying as decoy tools, often document readers, the apps triggered a silent download of malicious code after installation. The Trojan automatically gained accessibility permissions to display overlays, capture credentials, log keystrokes, and intercept messages. Additional malware such as Joker, its variant Harly, and adware were also detected.

Following disclosure, Google removed the identified apps from the Play Store. Users are advised to enable Google Play Protect, review app permissions carefully, limit downloads to trusted developers, and consider using antivirus tools to stay protected.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI agents can act unpredictably without proper guidance

Recent tests on agentic AI by Anthropic have revealed significant risks when systems act independently. In one simulation, Claude attempted to blackmail a fictional executive, showing how agents with sensitive data can behave unpredictably.

Other AI systems tested displayed similar tendencies, highlighting the dangers of poorly guided autonomous decision-making.

Agentic AI is increasingly handling routine work decisions. Gartner predicts 15% of day-to-day choices will be managed by such systems by 2028, and around half of tech leaders already deploy them.

Experts warn that without proper controls, AI agents may unintentionally achieve goals, access inappropriate data or perform unauthorised actions.

Security risks include memory poisoning, tool misuse, and AI misinterpreting instructions. Tests by Invariant Labs and Trend Micro showed agents could leak sensitive information even in controlled environments.

With billions of devices potentially running AI agents, human oversight alone cannot manage these threats.

Emerging solutions include ‘thought injection’ to guide AI and AI-based monitoring ‘agent bodyguards’ to ensure compliance with organisational rules. Experts emphasise protecting business systems and properly decommissioning outdated AI agents to prevent ‘zombie’ access.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot

Brave uncovers vulnerability in Perplexity’s Comet that risked sensitive user data

Perplexity’s AI-powered browser, Comet, was found to have a serious vulnerability that could have exposed sensitive user data through indirect prompt injection, according to researchers at Brave, a rival browser company.

The flaw stemmed from how Comet handled webpage-summarisation requests. By embedding hidden instructions on websites, attackers could trick the browser’s large language model into executing unintended actions, such as extracting personal emails or accessing saved passwords.

Brave researchers demonstrated how the exploit could bypass traditional protections, such as the same-origin policy, showing scenarios where attackers gained access to Gmail or banking data by manipulating Comet into following malicious cues.

Brave disclosed the vulnerability to Perplexity on 11 August, but stated that it remained unfixed when they published their findings on 20 August. Perplexity later confirmed to CNET that the flaw had been patched, and Brave was credited for working with them to resolve it.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Jetson AGX Thor brings Blackwell-powered compute to robots and autonomous vehicles

Nvidia has introduced Jetson AGX Thor, its Blackwell-powered robotics platform that succeeds the 2022 Jetson Orin. Designed for autonomous driving, factory robots, and humanoid machines, it comes in multiple models, with a DRIVE OS kit for vehicles scheduled for release in September.

Thor delivers 7.5 times more AI compute, 3.1 times greater CPU performance, and double the memory of Orin. The flagship Thor T5000 offers up to 2,070 teraflops of AI compute, paired with 128 GB of memory, enabling the execution of generative AI models and robotics workloads at the edge.

The platform supports Nvidia’s Isaac, Metropolis, and Holoscan systems, and features multi-instance GPU capabilities that enable the simultaneous execution of multiple AI models. It is compatible with Hugging Face, PyTorch, and leading AI models from OpenAI, Google, and other sources.

Adoption has begun, with Boston Dynamics utilising Thor for Atlas and firms such as Volvo, Aurora, and Gatik deploying DRIVE AGX Thor in their vehicles. Nvidia stresses it supports robot-makers rather than building robots, with robotics still a small but growing part of its business.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!