The state of Georgia is emerging as the focal point of a growing backlash against the rapid expansion of data centres powering the US’ AI boom.
Lawmakers in several states are now considering statewide bans, as concerns over energy consumption, water use and local disruption move to the centre of economic and environmental debate.
A bill introduced in Georgia would impose a moratorium on new data centre construction until March next year, giving state and municipal authorities time to establish more explicit regulatory rules.
The proposal arrives after Georgia’s utility regulator approved plans for an additional 10 gigawatts of electricity generation, primarily driven by data centre demand and expected to rely heavily on fossil fuels.
Local resistance has intensified after the Atlanta metropolitan area led the country in data centre construction last year, prompting multiple municipalities to impose their own temporary bans.
Critics argue that rapid development has pushed up electricity bills, strained water supplies and delivered fewer tax benefits than promised. At the same time, utility companies retain incentives to expand generation rather than improve grid efficiency.
The issue has taken on broader political significance as Georgia prepares for key elections that will affect utility oversight.
Supporters of the moratorium frame the pause as a chance for public scrutiny and democratic accountability, while backers of the industry warn that blanket restrictions risk undermining investment, jobs and long-term technological competitiveness.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Rising transatlantic tensions have reignited concerns over Europe’s heavy reliance on US Big Tech, exposing vulnerabilities across cloud services, AI, and digital infrastructure.
European lawmakers are increasingly pushing for homegrown alternatives, warning that excessive dependence on a small group of foreign providers threatens economic resilience, public services, and technological sovereignty.
European Parliament data shows over 80 percent of the EU’s digital products and infrastructure come from outside the bloc, with US firms dominating cloud and AI.
Officials warn the concentration increases geopolitical, cyber and supply risks, driving renewed efforts to boost Europe’s digital autonomy and competitiveness.
Initiatives such as Eurostack and rising open-source investment aim to build digital independence, though analysts say real sovereignty could take a decade and vast funding.
While policymakers accept that full decoupling from US technology remains unrealistic, pressure is mounting for governments and public institutions to prioritise European solutions and treat digital infrastructure as a strategic asset.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Firefighting is entering a new era with HEN Technologies. Founder Sunny Sethi has developed nozzles that extinguish fires up to three times faster while using two-thirds less water.
HEN’s products include nozzles, valves, monitors, and sprinklers equipped with sensors and smart circuits. A cloud platform tracks water flow, pressure, GPS, and weather conditions, allowing fire departments to respond efficiently and manage resources effectively.
Predictive analytics built on this data provide real-time insights for incident commanders. Firefighters can anticipate wind shifts, monitor water usage, and optimise operations, attracting interest from the Department of Homeland Security and military agencies worldwide.
Commercial adoption has been rapid, with revenue rising from $200,000 in 2023 to a projected $20 million this year. With 1,500 clients globally and 20 patents filed, HEN is also collecting real-world fire data that could support AI models simulating extreme environments.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Creative communities are pushing back against generative AI in literature and art. The Science Fiction and Fantasy Writers Association (SFWA) now bars works created wholly or partly with large language models, after criticism of earlier, more permissive rules.
San Diego Comic-Con faced controversy when it initially allowed AI-generated art in its exhibition, but not for sale. Artists argued that the rules threatened originality, prompting organisers to ban all AI-created material.
Authors warn that generative AI undermines the creative process. Some point out that large language model tools are already embedded in research and writing software, raising concerns about accidental disqualification from awards.
Fans and members welcomed SFWA’s decision, but questions remain about how broadly AI usage will be defined. Many creators insist that machines cannot replicate storytelling and artistic skill.
Industry observers expect other cultural organisations to follow similar policies this year. The debate continues over ethics, fairness, and technology’s role in arts and literature.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
New measures are being introduced in West Northamptonshire with the deployment of an AI-powered CCTV tower to combat fly-tipping in known hotspots. The mobile system will be rotated between locations until January 2027 to improve detection and deterrence.
Fly-tipping remains a significant issue across the area, with more than 21,000 incidents cleared between April 2024 and March 2025. Local authorities say illegal dumping damages neighbourhoods, harms wildlife and places a heavy financial burden on taxpayers.
The tower uses 360-degree cameras and AI to monitor activity and identify offences as they occur. Automatic number plate recognition allows enforcement officers to link incidents to suspected vehicles more quickly.
Council leaders say a similar scheme in Dartford has reduced fly-tipping and believe the technology sends a strong message to offenders. Residents are encouraged to report incidents through the council website or smartphone app to support enforcement efforts.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
Monnett is a European-built social media platform designed to give people control over their online feeds. Users can choose exactly what they see, prioritise friends’ posts, and opt out of surveillance-style recommendation systems that dominate other networks.
Unlike mainstream platforms, Monnett puts privacy first: there is no profiling or sale of user data, and private chats are protected rather than mined for advertising. The platform also keeps “AI slop” and other generative AI content from shaping people’s feeds, emphasising human-centred interaction.
Built in Luxembourg, at the heart of Europe, Monnett reflects a growing push for digital sovereignty in the European Union, where citizens, regulators and developers want more control over how their digital spaces are governed and how personal data is treated.
Core features include full customisation of the feed algorithm, no shadowbans, strong privacy safeguards, and a focus on genuine social connection. Monnett aims to win over users who prefer meaningful online interaction to addictive feeds and opaque data practices.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Australia’s social media ban for under-16s is worrying social media companies. According to the country’s eSafety Commissioner, the platforms resisted the policy largely because they fear similar bans could spread internationally.
The ban has already led to the closure of 4.7 million child-linked accounts across platforms including Instagram, TikTok and Snapchat. Authorities argue the measures are necessary to protect children from harmful algorithms and addictive design.
Social media companies operating in Australia, including Meta, say stronger safeguards are needed but oppose a blanket ban. Critics have warned about privacy risks, while regulators insist early data shows limited migration to alternative platforms.
Australia is now working with partners such as the UK to push for tougher global standards on online child safety, and companies that fail to enforce the rules effectively face fines of up to A$49.5m.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Gulf states are accelerating AI investment to drive diversification, while regulators struggle to keep pace with rapid technological change. Saudi Arabia, the UAE, and Qatar are deploying AI across key sectors while pursuing regional leadership in digital innovation.
Despite political commitment and large-scale funding, policymakers struggle to balance innovation with risk management. AI’s rapid pace and global reach strain governance, while foreign tech reliance raises sovereignty and security risks.
Corporate influence, intensifying geopolitical competition, and the urgent race to attract foreign capital further complicate oversight efforts, constraining regulators’ ability to impose robust and forward-looking governance frameworks.
With AI increasingly viewed as a source of economic and strategic power, Gulf governments face a narrowing window to establish effective regulatory frameworks before the technology becomes deeply embedded across critical infrastructure.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
Nike has launched an internal investigation following claims by the WorldLeaks cybercrime group that company data was stolen from its systems.
The sportswear giant said it is assessing a potential cybersecurity incident after the group listed Nike on its Tor leak site and published a large volume of files allegedly taken during the intrusion.
WorldLeaks claims to have released approximately 1.4 terabytes of data, comprising more than 188,000 files. The group specialises in data theft and extortion, pressuring organisations to pay by threatening public disclosure rather than encrypting their systems with ransomware.
The cybercrime operation emerged in 2025 after rebranding from Hunters International, a ransomware gang active since 2023. Increased law enforcement pressure reportedly led the group to abandon encryption-based attacks and focus exclusively on stealing sensitive corporate data.
The incident adds to growing concerns across the retail and apparel sector, following a recent breach at Under Armour that exposed tens of millions of customer records.
Nike has stated that consumer privacy and data protection remain priorities while the investigation continues.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!
US companies are increasingly adopting Chinese AI models as part of their core technology stacks, raising questions about global leadership in AI. Pinterest has confirmed it is using Chinese-developed models to improve recommendations and shopping features.
Executives point to open-source Chinese models such as DeepSeek and tools from Alibaba as faster, cheaper and easier to customise, arguing they can outperform proprietary alternatives at a fraction of the cost.
Adoption extends beyond Pinterest, with Airbnb also relying on Chinese AI to power customer service tools. Data from Hugging Face shows Chinese models frequently rank among the most downloaded worldwide, including among US developers.
Researchers at Stanford University have found that Chinese AI capabilities now match or exceed those of global peers. US firms such as OpenAI and Meta remain focused on proprietary systems, leaving China to dominate open-source AI development.
Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!