Georgia moves to curb AI data centre expansion amid energy concerns

The state of Georgia is emerging as the focal point of a growing backlash against the rapid expansion of the data centres powering the US AI boom.

Lawmakers in several states are now considering statewide bans, as concerns over energy consumption, water use and local disruption move to the centre of economic and environmental debate.

A bill introduced in Georgia would impose a moratorium on new data centre construction until March next year, giving state and municipal authorities time to establish more explicit regulatory rules.

The proposal arrives after Georgia’s utility regulator approved plans for an additional 10 gigawatts of electricity generation, primarily driven by data centre demand and expected to rely heavily on fossil fuels.

Local resistance has intensified as the Atlanta metropolitan area led the country in data centre construction last year, prompting multiple municipalities to impose their own temporary bans.

Critics argue that rapid development has pushed up electricity bills, strained water supplies and delivered fewer tax benefits than promised. At the same time, utility companies retain incentives to expand generation rather than improve grid efficiency.

The issue has taken on broader political significance as Georgia prepares for key elections that will affect utility oversight.

Supporters of the moratorium frame the pause as a chance for public scrutiny and democratic accountability, while backers of the industry warn that blanket restrictions risk undermining investment, jobs and long-term technological competitiveness.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Monnett highlights EU digital sovereignty in social media

Monnett is a European-built social media platform designed to give people control over their online feeds. Users can choose exactly what they see, prioritise friends’ posts, and opt out of surveillance-style recommendation systems that dominate other networks.

Unlike mainstream platforms, Monnett puts privacy first: there is no profiling or sale of user data, and private chats are protected rather than mined for advertising. The platform also keeps “AI slop” and other generative AI content from shaping people’s feeds, emphasising human-centred interaction.

Created and built in Luxembourg at the heart of Europe, Monnett’s design reflects a growing push for digital sovereignty in the European Union, where citizens, regulators and developers want more control over how their digital spaces are governed and how personal data is treated.

Core features include full customisation of the feed algorithm, no shadowbans, strong privacy safeguards, and a focus on genuine social connection. Monnett aims to win users who prefer meaningful online interaction over addictive feeds and opaque data practices.

Australia’s social media ban for under-16s worries platforms

Australia’s social media ban for under-16s is worrying the companies it targets. According to the country’s eSafety Commissioner, the platforms fear the ban could set off a global trend, and regulators say the major players complied only reluctantly, concerned that similar rules could spread internationally.

In Australia, the ban has already led to the closure of 4.7 million child-linked accounts across platforms, including Instagram, TikTok and Snapchat. Authorities argue the measures are necessary to protect children from harmful algorithms and addictive design.

Social media companies operating in Australia, including Meta, say stronger safeguards are needed but oppose a blanket ban. Critics have warned about privacy risks, while regulators insist early data shows limited migration to alternative platforms.

Australia is now working with partners such as the UK to push for tougher global standards on online child safety, and companies that fail to enforce the rules effectively face fines of up to A$49.5m.

China gains ground in global AI race

US companies are increasingly adopting Chinese AI models as part of their core technology stacks, raising questions about global leadership in AI. Pinterest, for example, has confirmed it is using Chinese-developed models to improve its recommendation and shopping features.

Executives point to open-source Chinese models such as DeepSeek and tools from Alibaba as faster, cheaper and easier to customise, saying they can outperform proprietary alternatives at a fraction of the cost.

Adoption extends beyond Pinterest, with Airbnb also relying on Chinese AI to power customer service tools. Data from Hugging Face shows Chinese models frequently rank among the most downloaded worldwide, including among US developers.

Researchers at Stanford University have found that Chinese AI capabilities now match or exceed those of global peers. Meanwhile, US firms such as OpenAI and Meta remain focused on proprietary systems, leaving China to dominate open-source AI development.

IMF chief sounds alarm at Davos 2026 over AI and disruption to entry-level labour

AI has dominated discussions at the World Economic Forum in Davos, where IMF managing director Kristalina Georgieva warned that labour markets are already undergoing rapid structural disruption.

According to Georgieva, demand for skills is shifting unevenly, with productivity gains benefiting some workers while younger people and first-time job seekers face shrinking opportunities.

Entry-level roles are particularly exposed as AI systems absorb routine and clerical tasks traditionally used to gain workplace experience.

Georgieva described the effect on young workers as comparable to a labour-market tsunami, arguing that reduced access to foundational roles risks long-term scarring for an entire generation entering employment.

IMF research suggests AI could affect roughly 60 percent of jobs in advanced economies and 40 percent globally, with only about half of exposed workers likely to benefit.

For others, automation may lead to lower wages, slower hiring and intensified pressure on middle-income roles lacking AI-driven productivity gains.

At Davos 2026, Georgieva warned that the rapid, unregulated deployment of AI in advanced economies risks outpacing public policy responses.

She argued that without clear guardrails and inclusive labour strategies, technological acceleration could deepen inequality rather than support broad-based economic resilience.

Apple accuses the EU of blocking App Store compliance changes

Apple has accused the European Commission of preventing it from implementing App Store changes designed to comply with the Digital Markets Act, following a €500 million fine for breaching the regulation.

The company claims it submitted a formal compliance plan in October and has yet to receive a response from EU officials.

In a statement, Apple argued that the Commission requested delays while gathering market feedback, a process the company says lasted several months and lacked a clear legal basis.

The US tech giant described the enforcement approach as politically motivated and excessively burdensome, accusing the EU of unfairly targeting an American firm.

The Commission has rejected those claims, saying discussions with Apple remain ongoing and emphasising that any compliance measures must support genuinely viable alternative app stores.

Officials pointed to the emergence of multiple competing marketplaces after the DMA entered into force as evidence of market demand.

Scrutiny has increased following the decision by SetApp mobile to shut down its iOS app store in February, with the developer citing complex and evolving business terms.

Questions remain over whether Apple’s proposed shift towards commission-based fees and expanded developer communication rights will satisfy EU regulators.

ChatGPT model draws scrutiny over Grokipedia citations

OpenAI’s latest GPT-5.2 model has sparked concern after repeatedly citing Grokipedia, an AI-generated encyclopaedia launched by Elon Musk’s xAI, raising fresh fears of misinformation amplification.

Testing by The Guardian showed the model referencing Grokipedia multiple times when answering questions on geopolitics and historical figures.

Launched in October 2025, the AI-generated platform rivals Wikipedia but relies solely on automated content without human editing. Critics warn that limited human oversight raises risks of factual errors and ideological bias, as Grokipedia faces criticism for promoting controversial narratives.

OpenAI said its systems use safety filters and diverse public sources, while xAI dismissed the concerns as media distortion. The episode deepens scrutiny of AI-generated knowledge platforms amid growing regulatory and public pressure for transparency and accountability.

Writers challenge troubling AI assumptions about language and style

A growing unease among writers is emerging as AI tools reshape how language is produced and perceived. Long-established habits, including the use of em dashes and semicolons, are increasingly being viewed with suspicion as machine-generated text becomes more common.

The concern is not opposition to AI itself, but the blurring of boundaries between human expression and automated output. Writers whose work was used to train large language models without consent say stylistic traits developed over decades are now being misread as algorithmic authorship.

Academic and editorial norms are also shifting under this pressure. Teaching practices that once valued rhythm, voice, and individual cadence are increasingly challenged by stricter stylistic rules, sometimes framed as safeguards against sloppy or machine-like writing rather than as matters of taste or craft.

At the same time, productivity tools embedded into mainstream software continue to intervene in the writing process, offering substitutions and revisions that prioritise clarity and efficiency over nuance. Such interventions risk flattening language and discouraging the idiosyncrasies that define human authorship.

As AI becomes embedded in publishing, education, and professional writing, the debate is shifting from detection to preservation. Many writers warn that protecting human voice and stylistic diversity is essential, arguing that affectless, uniform prose would erode creativity and trust.

Hollywood figures back anti-AI campaign

More than 800 creatives in the US have signed on to an anti-AI campaign accusing big technology companies of exploiting human work. High-profile figures from film and television have backed the initiative, which argues that training AI on creative content without consent amounts to theft.

The campaign was launched by the Human Artistry Campaign, a coalition representing creators, unions and industry groups. Supporters say AI systems should not be allowed to use artistic work without permission and fair compensation.

Actors and filmmakers warned that unchecked AI adoption threatens livelihoods across film, television and music. Campaign organisers said innovation should not come at the expense of creators’ rights or ownership of their work.

The statement adds to growing pressure on lawmakers and technology firms in the US. Creative workers are calling for clearer rules on how AI can be developed and deployed across the entertainment industry.

LinkedIn phishing campaign exposes dangerous DLL sideloading attack

A multi-faceted phishing campaign is abusing LinkedIn private messages to deliver weaponised malware using DLL sideloading, security researchers have warned. The activity relies on PDFs and archive files that appear trustworthy to bypass conventional security controls.

Attackers contact targets on LinkedIn and send self-extracting archives disguised as legitimate documents. When opened, a malicious DLL is sideloaded into a trusted PDF reader, triggering memory-resident malware that establishes encrypted command-and-control channels.

Using LinkedIn messages increases engagement by exploiting professional trust and bypassing email-focused defences. DLL sideloading allows malicious code to run inside legitimate applications, complicating detection.

The campaign enables credential theft, data exfiltration and lateral movement through in-memory backdoors. Encrypted command-and-control traffic makes containment more difficult.

Organisations using common PDF software or Python tooling face elevated risk. Defenders are advised to strengthen social media phishing awareness, monitor DLL loading behaviour and rotate credentials where compromise is suspected.
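One way to act on that advice is to flag modules loaded from unexpected locations. The sketch below is a minimal illustration of that heuristic, not a production detector: the trusted-directory list, the `suspicious_dlls` helper and the example paths are all hypothetical, and a real deployment would take its module list from an EDR agent or OS telemetry rather than a hard-coded list.

```python
from pathlib import PureWindowsPath

# Directories where legitimately installed DLLs normally live
# (an illustrative, deliberately incomplete list).
TRUSTED_DIRS = {
    r"c:\windows\system32",
    r"c:\windows\syswow64",
    r"c:\program files",
    r"c:\program files (x86)",
}

def suspicious_dlls(loaded_modules):
    """Flag DLL paths loaded from outside trusted directories.

    `loaded_modules` is a list of absolute DLL paths as a monitoring
    agent might report them. A DLL sitting next to a user-downloaded
    archive instead of in a system or install directory is a classic
    sideloading indicator.
    """
    flagged = []
    for path in loaded_modules:
        parent = str(PureWindowsPath(path.lower()).parent)
        if not any(parent.startswith(d) for d in TRUSTED_DIRS):
            flagged.append(path)
    return flagged

# Example: a DLL dropped beside a self-extracting archive is flagged,
# while the genuine system library is not.
mods = [
    r"C:\Windows\System32\kernel32.dll",
    r"C:\Users\victim\Downloads\invoice\version.dll",
]
print(suspicious_dlls(mods))  # prints only the Downloads path
```

Path-prefix checks like this are coarse (attackers can also write into trusted directories), so they complement, rather than replace, the behavioural monitoring the researchers recommend.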
