Google boosts Virginia with $9 billion AI and cloud projects

Alphabet’s Google has confirmed plans to invest $9 billion in Virginia by 2026, strengthening the state’s role as a hub for data infrastructure in the US.

The focus will be on AI and cloud computing, positioning Virginia at the forefront of global technological competition.

The plan includes a new Chesterfield County facility and expansion at existing campuses in Loudoun and Prince William counties. These centres are part of the digital backbone that supports cloud services and AI workloads.

Dominion Energy will supply power for the new Chesterfield project, which may take up to seven years before it is fully operational.

The rapid growth of data centres in Virginia has increased concerns about energy demand. Google said it is working with partners on efficiency and power management solutions and funding community development.

Earlier in August, the company announced a $1 billion initiative to provide every college student in Virginia with one year of free access to its AI Pro plan and training opportunities.

Google’s move follows a broader trend in the technology sector. Microsoft, Amazon, Alphabet, and Meta are expected to spend hundreds of billions of dollars on AI-related projects, with much dedicated to new data centres.

Northern Virginia remains the boom’s epicentre, with Loudoun County earning the nickname ‘Data Centre Alley’ for its dense concentration of facilities.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Green AI and the battle between progress and sustainability

AI is increasingly recognised for its transformative potential and growing environmental footprint across industries. The development and deployment of large-scale AI models require vast computational resources, significant amounts of electricity, and extensive cooling infrastructure.

For instance, studies have shown that training a single large language model can consume as much electricity as several hundred households use in a year, while data centres operated by companies like Google and Microsoft require millions of litres of water annually to keep servers cool.

That has sparked an emerging debate around what is now often called ‘Green AI’, the effort to balance technological progress with sustainability concerns. On one side, critics warn that the rapid expansion of AI comes at a steep ecological cost, from high carbon emissions to intensive water and energy consumption.

On the other hand, proponents argue that AI can be a powerful tool for achieving sustainability goals, helping optimise energy use, supporting climate research, and enabling greener industrial practices. The tension between sustainability and progress is becoming central to discussions on digital policy, raising key questions.

Should governments and companies prioritise environmental responsibility, even if it slows down innovation? Or should innovation come first, with sustainability challenges addressed through technological solutions as they emerge?

Sustainability challenges

In the following paragraphs, we present the main sustainability challenges associated with the rapid expansion of AI technologies.

Energy consumption

The training of large-scale AI models requires massive computational power. Estimates suggest that developing state-of-the-art language models can demand thousands of GPUs running continuously for weeks or even months.

According to a 2019 study from the University of Massachusetts Amherst, training a single natural language processing model emitted roughly 284 tonnes of CO₂, equivalent to the lifetime emissions of five cars. As AI systems grow larger, their energy appetite only increases, raising concerns about the long-term sustainability of this trajectory.
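Such figures typically come from simple energy accounting. The sketch below is a hedged back-of-envelope estimate; every input (GPU count, power draw, training duration, data-centre overhead, grid carbon intensity) is an illustrative assumption rather than a figure from the Amherst study.

```python
# Illustrative back-of-envelope estimate; all inputs are assumptions, not measured values.
gpus = 1_000               # accelerators running in parallel (assumed)
watts_per_gpu = 400        # average draw per GPU (assumed)
hours = 30 * 24            # one month of continuous training (assumed)
pue = 1.2                  # data-centre power usage effectiveness overhead (assumed)
grid_kg_co2_per_kwh = 0.4  # carbon intensity of a fossil-heavy grid (assumed)

energy_kwh = gpus * watts_per_gpu * hours * pue / 1_000
emissions_tonnes = energy_kwh * grid_kg_co2_per_kwh / 1_000
print(f"{energy_kwh:,.0f} kWh ≈ {emissions_tonnes:,.0f} t CO2")
# ~345,600 kWh ≈ ~138 t CO2 -- the same order of magnitude as the study's estimate
```

Small changes in any of these assumptions, such as a greener grid or more efficient chips, shift the result substantially, which is why published estimates vary so widely.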

Carbon emissions

Carbon emissions are closely tied to energy use. Unless powered by renewable sources, data centres rely heavily on electricity grids dominated by fossil fuels. Research indicates that the carbon footprint of training advanced models like GPT-3 and beyond is several orders of magnitude higher than that of earlier generations. That research highlights the environmental trade-offs of pursuing ever more powerful AI systems in a world struggling to meet climate targets.

Water usage and cooling needs

Beyond electricity, AI infrastructure consumes vast amounts of water for cooling. For example, Google reported that in 2021 its data centre in The Dalles, Oregon, used over 1.2 billion litres of water to keep servers cool. Similarly, Microsoft faced criticism in Arizona for operating data centres in drought-prone areas while local communities dealt with water restrictions. Such cases highlight the growing tension between AI infrastructure needs and local environmental realities.

Resource extraction and hardware demands

The production of AI hardware also has ecological costs. High-performance chips and GPUs depend on rare earth minerals and other raw materials, the extraction of which often involves environmentally damaging mining practices. That adds a hidden but significant footprint to AI development, extending beyond data centres to global supply chains.

Inequality in resource distribution

Finally, the environmental footprint of AI amplifies global inequalities. Wealthier countries and major corporations can afford the infrastructure and energy needed to sustain AI research, while developing countries face barriers to entry.

At the same time, the environmental consequences, whether in the form of emissions or resource shortages, are shared globally. That creates a digital divide where the benefits of AI are unevenly distributed, while the costs are widely externalised.

Progress & solutions

While AI consumes vast amounts of energy, it is also being deployed to reduce energy use in other domains. Google’s DeepMind, for example, developed an AI system that optimised cooling in its data centres, cutting energy consumption for cooling by up to 40%. Similarly, IBM has used AI to optimise building energy management, reducing operational costs and emissions. These cases show how the same technology that drives consumption can also be leveraged to reduce it.

AI has also become crucial in climate modelling, weather prediction, and renewable energy management. For example, Microsoft’s AI for Earth program supports projects worldwide that use AI to address biodiversity loss, climate resilience, and water scarcity.

Artificial intelligence also plays a role in integrating renewable energy into smart grids, such as in Denmark, where AI systems balance fluctuations in wind power supply with real-time demand.

There is growing momentum toward making AI itself more sustainable. OpenAI and other research groups have increasingly focused on techniques like model distillation (compressing large models into smaller versions) and low-rank adaptation (LoRA) methods, which allow for fine-tuning large models without retraining the entire system.
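As a rough illustration of the LoRA idea, the sketch below wraps a frozen linear layer with a small trainable low-rank update, so only a fraction of the parameters need training; the class name, rank, and scaling are illustrative assumptions, not any particular library’s implementation.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update: y = Wx + (B A)x * scale."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                      # freeze the pretrained weights
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scale

layer = LoRALinear(nn.Linear(768, 768), rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable:,} of {total:,} parameters")  # ~12k of ~600k in this toy case
```

Because only the two small matrices are updated, the energy and memory cost of fine-tuning drops sharply compared with retraining the full model.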


Meanwhile, startups like Hugging Face promote open-source, lightweight models (like DistilBERT) that drastically cut training and inference costs while remaining highly effective.
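A brief usage sketch with the Hugging Face transformers library shows how little it takes to run such a lightweight model; the example sentence is arbitrary, and the parameter and performance figures in the comments are approximate values reported in the DistilBERT paper.

```python
from transformers import AutoModel, AutoTokenizer

# DistilBERT retains roughly 97% of BERT's benchmark performance with about 66M parameters
# versus ~110M for BERT-base, cutting inference cost and energy use substantially.
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModel.from_pretrained("distilbert-base-uncased")

inputs = tokenizer("Green AI balances progress and sustainability.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, num_tokens, 768)
```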

Hardware manufacturers are also moving toward greener solutions. NVIDIA and Intel are working on chips with lower energy requirements per computation. On the infrastructure side, major providers are pledging ambitious climate goals.

Microsoft has committed to becoming carbon negative by 2030, while Google aims to operate on 24/7 carbon-free energy by 2030. Amazon Web Services is also investing heavily in renewable-powered data centres to offset the footprint of its rapidly growing cloud services.

Governments and international organisations are beginning to address the sustainability dimension of AI. The European Union’s AI Act introduces transparency and reporting requirements that could extend to environmental considerations in the future.

In addition, initiatives such as the OECD’s AI Principles highlight sustainability as a core value for responsible AI. Beyond regulation, some governments fund research into ‘green AI’ practices, including Canada’s support for climate-oriented AI startups and the European Commission’s Horizon Europe program, which allocates resources to environmentally conscious AI projects.

Balancing the two sides

The debate around Green AI ultimately comes down to finding the right balance between environmental responsibility and technological progress. On one side, the race to build ever larger and more powerful models has accelerated innovation, driving breakthroughs in natural language processing, robotics, and healthcare. On the other, the ‘bigger is better’ approach comes with significant sustainability costs that are increasingly difficult to ignore.

Some argue that scaling up is essential for global competitiveness. If one region imposes strict environmental constraints on AI development while another prioritises innovation at any cost, the former risks falling behind in technological leadership. This dilemma raises a geopolitical question: sustainability standards may be desirable, but they must also account for the competitive dynamics of global AI development.


At the same time, advocates of smaller and more efficient models suggest that technological progress does not necessarily require exponential growth in size and energy demand. Innovations in model efficiency, greener hardware, and renewable-powered infrastructure demonstrate that sustainability and progress are not mutually exclusive.

Instead, they can be pursued in tandem if the right incentives, investments, and policies are in place. This leaves governments, companies, and researchers facing a complex but urgent question: should the future of AI prioritise scale and speed, or should it embrace efficiency and sustainability as guiding principles?

Conclusion

The discussion on Green AI highlights one of the central dilemmas of our digital age: how to pursue technological progress without undermining environmental sustainability. On the one hand, the growth of large-scale AI systems brings undeniable costs in terms of energy, water, and resource consumption. On the other, the very same technology holds the potential to accelerate solutions to global challenges, from optimising renewable energy to advancing climate research.

Rather than framing sustainability and innovation as opposing forces, the debate increasingly suggests the need for integration. Policies, corporate strategies, and research initiatives will play a decisive role in shaping this balance. Whether through regulations that encourage transparency, investments in renewable infrastructure, or innovations in model efficiency, the path forward will depend on aligning technological ambition with ecological responsibility.

In the end, the future of AI may not rest on choosing between sustainability and progress, but on finding ways to ensure that progress itself becomes sustainable.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Cells engineered to produce biological qubit open new quantum frontier

Researchers at the University of Chicago’s Pritzker School of Molecular Engineering have achieved a first-of-its-kind breakthrough by programming living cells to build functional protein qubits.

These quantum bits, created from naturally occurring proteins, can detect signals thousands of times stronger than existing quantum sensors.

The interdisciplinary team, led by co-investigators David Awschalom and Peter Maurer, used a protein similar to the fluorescent marker proteins widely used in biological imaging.

Cells can position the protein with atomic precision, allowing it to be employed as a quantum sensor within biological environments.

The findings, published in Nature, suggest this bio-integrated sensor could enable nanoscale MRI to reveal cellular structures like never before and inspire new quantum materials.

The advance marks a shift from adapting quantum tools to work inside biological systems toward harnessing nature itself as a quantum platform.

The researchers demonstrated that qubits can function in the noisy, warm environments of living systems, conditions that usually hinder quantum technology. The broader implication is a hybrid future in which cells carry out life’s functions while also serving as quantum instruments for scientific discovery.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

UNGA adopts terms of reference for AI Scientific Panel and Global Dialogue on AI governance

On 26 August 2025, following several months of negotiations in New York, the UN General Assembly (UNGA) adopted a resolution (A/RES/79/325) outlining the terms of reference and modalities for the establishment and functioning of two new AI governance mechanisms: an Independent International Scientific Panel on AI and a Global Dialogue on AI Governance. The creation of these mechanisms was formally agreed by UN member states in September 2024, as part of the Global Digital Compact.

The 40-member Scientific Panel has the main task of ‘issuing evidence-based scientific assessments synthesising and analysing existing research related to the opportunities, risks and impacts of AI’, in the form of one annual ‘policy-relevant but non-prescriptive summary report’ to be presented to the Global Dialogue.

The Panel will also ‘provide updates on its work up to twice a year to hear views through an interactive dialogue of the plenary of the General Assembly with the Co-Chairs of the Panel’. The UN Secretary-General is expected to shortly launch an open call for nominations for Panel members; he will then recommend a list of 40 members to be appointed by the General Assembly. 

The Global Dialogue on AI Governance, to involve governments and all relevant stakeholders, will function as a platform ‘to discuss international cooperation, share best practices and lessons learned, and to facilitate open, transparent and inclusive discussions on AI governance with a view to enabling AI to contribute to the implementation of the Sustainable Development Goals and to closing the digital divides between and within countries’. It will be convened annually, for up to two days, in the margins of existing relevant UN conferences and meetings, alternating between Geneva and New York. Each meeting will consist of a multistakeholder plenary meeting with a high-level governmental segment, a presentation of the panel’s annual report, and thematic discussions. 

The Dialogue will be launched during a high-level multistakeholder informal meeting in the margins of the high-level week of UNGA’s 80th session (starting in September 2025). The Dialogue will then be held in the margins of the International Telecommunication Union’s AI for Good Global Summit in Geneva, in 2026, and of the multistakeholder forum on science, technology and innovation for the Sustainable Development Goals in New York, in 2027.

The General Assembly also decided that ‘the Co-Chairs of the second Dialogue will hold intergovernmental consultations to agree on common understandings on priority areas for international AI governance, taking into account the summaries of the previous Dialogues and contributions from other stakeholders, as an input to the high-level review of the Global Digital Compact and to further discussions’.

The provision represents the most significant change compared to the previous version of the draft resolution (rev4), which envisioned intergovernmental negotiations, led by the co-facilitators of the high-level review of the GDC, on a ‘declaration reflecting common understandings on priority areas for international AI governance’. An earlier draft (rev3) referred to a UNGA resolution on AI governance, which proved to be a contentious point during the negotiations.

To enable the functioning of these mechanisms, the Secretary-General is requested to ‘facilitate, within existing resources and mandates, appropriate Secretariat support for the Panel and the Dialogue by leveraging UN system-wide capacities, including those of the Inter-Agency Working Group on AI’.

States and other stakeholders are encouraged to ‘support the effective functioning of the Panel and Dialogue, including by facilitating the participation of representatives and stakeholders of developing countries by offering travel support, through voluntary contributions that are made public’. 

The continuation of the terms of reference of the Panel and the Dialogue may be considered and decided upon by UNGA during the high-level review of the GDC, at UNGA 82. 

***

The Digital Watch observatory has followed the negotiations on this resolution and published regular updates.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Travellers claim ChatGPT helps cut flight costs by hundreds of pounds

ChatGPT is increasingly used as a travel assistant, with some travellers claiming it can save hundreds of pounds on flights. Finance influencer Casper Opala shares cost-saving tips online and said the AI tool helped him secure a flight for £70 that initially cost more than £700.

Opala shared a series of prompts that allow ChatGPT to identify hidden routes, budget airlines not listed on major platforms, and potential savings through alternative airports or separate bookings. He also suggested using the tool to monitor prices for several days or compare one-way fares with return tickets.

While many money-saving tricks have existed for years, ChatGPT condenses the process, collecting results in seconds. Opala says this efficiency is a strong starting point for cheaper travel deals.

Experts, however, warn that ChatGPT is not connected to live flight booking systems. TravelBook’s Laura Pomer noted that the AI can sometimes present inaccurate or outdated fares, meaning users should always verify results before booking.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Jetson AGX Thor brings Blackwell-powered compute to robots and autonomous vehicles

Nvidia has introduced Jetson AGX Thor, its Blackwell-powered robotics platform that succeeds the 2022 Jetson Orin. Designed for autonomous driving, factory robots, and humanoid machines, it comes in multiple models, with a DRIVE OS kit for vehicles scheduled for release in September.

Thor delivers 7.5 times more AI compute, 3.1 times greater CPU performance, and double the memory of Orin. The flagship Thor T5000 offers up to 2,070 teraflops of AI compute, paired with 128 GB of memory, enabling the execution of generative AI models and robotics workloads at the edge.

The platform supports Nvidia’s Isaac, Metropolis, and Holoscan systems, and features multi-instance GPU capabilities that enable the simultaneous execution of multiple AI models. It is compatible with Hugging Face, PyTorch, and leading AI models from OpenAI, Google, and other sources.

Adoption has begun, with Boston Dynamics utilising Thor for Atlas and firms such as Volvo, Aurora, and Gatik deploying DRIVE AGX Thor in their vehicles. Nvidia stresses it supports robot-makers rather than building robots, with robotics still a small but growing part of its business.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Vietnam accelerates modernization of foreign affairs through technology and AI

The Ministry of Foreign Affairs of Vietnam spearheads an extensive digital transformation initiative in line with the Politburo’s Resolution No. 57-NQ/TW issued in December 2024. This resolution highlights the necessity of advancements in science, technology, and national digital transformation.

Under the guidance of Deputy Prime Minister and Minister Bui Thanh Son, the Ministry is committed to modernising its operations and improving efficiency, reflecting Vietnam’s broader digital evolution strategy across all sectors.

Key implementations of this transformation include the creation of three major digital platforms: an electronic information portal providing access to foreign policies and online public services, an online document management system for internal digitalisation, and an integrated data-sharing platform for connectivity and multi-dimensional data exchange.

The Ministry has digitised 100% of its administrative procedures, linking them to a national-level system, showcasing a significant stride towards administrative reform and efficiency. Additionally, the Ministry has fully adopted social media channels, including Facebook and Twitter, indicating its efforts to enhance foreign information dissemination and public engagement.

A central component of this initiative is the ‘Digital Literacy for All’ movement, inspired by President Ho Chi Minh’s historic ‘Popular Education’ campaign. This movement focuses on equipping diplomatic personnel with essential digital skills, transforming them into proficient ‘digital civil servants’ and ‘digital ambassadors.’ The Ministry aims to enhance its diplomatic functions in today’s globally connected environment by advancing its ability to navigate and utilise modern technologies.

The Ministry plans to develop its digital infrastructure further, strengthen data management, and integrate AI for strategic planning and predictive analysis.

Establishing a digital data warehouse for foreign information and enhancing human resources by nurturing technology experts within the diplomatic sector are also on the agenda. These actions reflect a strong commitment to fostering a professional and globally adept diplomatic service, poised to safeguard national interests and thrive in the digital age.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Humain Chat has been unveiled by Saudi Arabia to drive AI innovation

Saudi Arabia has taken a significant step in AI with the launch of Humain Chat, an app powered by one of the world’s largest Arabic-language training datasets.

Developed by state-backed venture Humain, the app is designed to strengthen the country’s role in AI while promoting sovereign technologies.

Built on the Allam large language model, Humain Chat allows real-time web search, speech input across Arabic dialects, bilingual switching between Arabic and English, and secure data compliance with Saudi privacy laws.

The app is already available on the web, iOS, and Android in Saudi Arabia, with plans for regional expansion across the Middle East before reaching global markets.

Humain was established in May under the leadership of Crown Prince Mohammed bin Salman and the Public Investment Fund. Its flagship model, ALLAM 34B, is described as the most advanced AI system created in the Arab world. The company said the app will evolve further as user adoption grows.

CEO Tareq Amin called the launch ‘a historic milestone’ for Saudi Arabia, stressing that Humain Chat shows how advanced AI can be developed in Arabic while staying culturally rooted and built by local expertise.

A team of 120 specialists based in the Kingdom created the platform.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

New WhatsApp features help manage unwanted groups

WhatsApp is expanding its tools to give users greater control over the groups they join and the conversations they take part in.

When someone not saved in a user’s contacts adds them to a group, WhatsApp now provides details about that group so they can immediately decide whether to stay or leave. If a user chooses to exit, they can also report the group directly to WhatsApp.

Privacy settings allow people to decide who can add them to groups. By default, the setting is set to ‘Everyone,’ but it can be adjusted to ‘My contacts’ or ‘My contacts except…’ for more security. Messages within groups can also be reported individually, with users having the option to block the sender.

Reported messages and groups are sent to WhatsApp for review, including the sender’s or group’s ID, the time the message was sent, and the message type.

Although blocking an entire group is impossible, users can block or report individual members or administrators if they are sending spam or inappropriate content. Reporting a group will send up to five recent messages from that chat to WhatsApp without notifying other members.

Exiting a group remains straightforward: users can tap the group name and select ‘Exit group.’ With these tools, WhatsApp aims to strengthen user safety, protect privacy, and provide better ways to manage unwanted interactions.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Bitcoin price drops after whale sell-off while Ethereum holds

Bitcoin’s price weakened sharply after a $2.7 billion whale sell-off triggered automated liquidations, pushing the cryptocurrency toward key support near $110,500. Over $846 million in positions were liquidated, and total crypto market capitalisation slipped to about $3.83 trillion.

Indicators suggest short-term volatility and choppy price action.

Technical metrics highlight the divergence between Bitcoin and Ethereum. Bitcoin’s ADX at 16 and RSI near 42 signal low trend conviction and growing selling pressure, while the Squeeze Momentum Indicator points to potential volatility ahead.

Ethereum remains comparatively resilient, with an ADX around 41, a bullish 50–200 EMA spread, and RSI near 59, supporting continued positive momentum.
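For readers less familiar with the indicators cited above, the sketch below shows how a standard 14-period RSI is computed from closing prices using Wilder smoothing; the price series is synthetic and purely illustrative.

```python
# Relative Strength Index with the standard 14-period Wilder smoothing; prices are synthetic.
def rsi(closes, period=14):
    gains, losses = [], []
    for prev, curr in zip(closes, closes[1:]):
        change = curr - prev
        gains.append(max(change, 0.0))
        losses.append(max(-change, 0.0))
    avg_gain = sum(gains[:period]) / period
    avg_loss = sum(losses[:period]) / period
    for g, l in zip(gains[period:], losses[period:]):
        avg_gain = (avg_gain * (period - 1) + g) / period
        avg_loss = (avg_loss * (period - 1) + l) / period
    if avg_loss == 0:
        return 100.0
    return 100 - 100 / (1 + avg_gain / avg_loss)

closes = [110_500 + (i % 5 - 2) * 400 - i * 60 for i in range(40)]  # gently declining series
print(round(rsi(closes), 1))  # comes out below 50 for this declining series
```

Readings below 50 (like Bitcoin’s ~42) indicate that recent losses outweigh gains, while readings above 50 (like Ethereum’s ~59) indicate the opposite.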

Traders are advised to emphasise risk management amid elevated uncertainty. Key Bitcoin support levels sit at $110,500 and $107,000–$107,600, with resistance at $116,000 and $120,000. Ethereum support ranges from $4,194 to $4,400, while immediate resistance reaches $4,954.

Tightening stop-losses, reducing leverage, and waiting for confirmed volatility resolution are recommended before initiating new positions.

The recent whale-induced volatility demonstrates how a large order can swiftly impact market dynamics. While Bitcoin shows fragile trend conditions, Ethereum’s technical strength provides a measure of stability.

Monitoring indicators and key levels remains essential for navigating the current environment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!