Fight over state AI authority heats up in US Congress

US House Republicans are mounting a new effort to block individual states from regulating AI, reviving a proposal that the Senate overwhelmingly rejected just four months ago. Their push aligns with President Donald Trump’s call for a single federal AI standard, which he argues is necessary to avoid a ‘patchwork’ of state-level rules that he claims hinder economic growth and fuel what he described as ‘woke AI.’

House Majority Leader Steve Scalise is now attempting to insert the measure into the National Defense Authorization Act, a must-pass annual defence policy bill expected to be finalised in the coming weeks. If successful, the move would place a moratorium on state-level AI regulation, effectively ending the states’ current role as the primary rule-setters on issues ranging from child safety and algorithmic fairness to workforce impacts.

The proposal faces significant resistance, including from within the Republican Party. Lawmakers who blocked the earlier attempt in July warned that stripping states of their authority could weaken protections in areas such as copyright, child safety, and political speech.

Critics, such as Senator Marsha Blackburn and Florida Governor Ron DeSantis, argue that the measure would amount to a handout to Big Tech and leave states unable to guard against the use of predatory or intrusive AI.

Congressional leaders hope to reach a deal before the Thanksgiving recess, but the ultimate fate of the measure remains uncertain. Any version of the moratorium would still need bipartisan support in the Senate, where most legislation requires 60 votes to advance.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

NVIDIA pushes forward with AI-ready data

Enterprises are facing growing pressure to prepare unstructured data for use in modern AI systems as organisations struggle to turn prototypes into production tools.

Around 40 percent of AI projects advance beyond the pilot phase, with limits in data quality and availability the main reason the rest stall. Most organisational information now arrives in unstructured form, ranging from emails to video files, which offers little inherent structure and places a heavy load on governance systems.

AI agents need secure, recent and reliable data instead of fragmented information scattered across multiple storage silos. Preparing such data demands extensive curation, metadata work, semantic chunking and the creation of vector embeddings.
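
For readers who want to see what that preparation looks like in practice, the short sketch below chunks a single document, attaches provenance metadata and computes vector embeddings. It is a minimal illustration only: the sentence-transformers library, the chunk size and the file name are assumptions made for the example, not part of NVIDIA’s platform.

```python
# Minimal sketch of the preparation steps described above: chunking a
# document, attaching metadata, and computing vector embeddings.
# Library choice (sentence-transformers), chunk size and the source file
# are illustrative assumptions, not NVIDIA's pipeline.
from sentence_transformers import SentenceTransformer

def chunk_text(text: str, max_words: int = 200) -> list[str]:
    """Split a document into roughly fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

model = SentenceTransformer("all-MiniLM-L6-v2")

document = open("policy_report.txt", encoding="utf-8").read()  # hypothetical source file
chunks = chunk_text(document)

# Each chunk keeps provenance metadata so governance checks can trace it back.
records = [
    {
        "source": "policy_report.txt",
        "chunk_id": i,
        "text": chunk,
        "embedding": model.encode(chunk).tolist(),
    }
    for i, chunk in enumerate(chunks)
]
```

In a production setting, records like these would typically be loaded into a vector store so that agents can retrieve them with access controls and provenance intact.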

Enterprises also struggle with the rising speed of data creation and the spread of duplicate copies, which increases both operational cost and security concerns.

An emerging approach by NVIDIA, known as the AI data platform, aims to address these challenges by embedding GPU acceleration directly into the data path. The platform prepares and indexes information in place, allowing enterprises to reduce data drift, strengthen governance and avoid unnecessary replication.

Any change to a source document is immediately reflected in the associated AI representations, improving accuracy and consistency for business applications.

NVIDIA is positioning its own AI Data Platform reference design as a next step for enterprise storage. The design combines RTX PRO 6000 Blackwell Server Edition GPUs, BlueField-3 DPUs and integrated AI processing pipelines.

Leading technology providers including Cisco, Dell Technologies, IBM, HPE, NetApp, Pure Storage and others have adopted the model as they prepare storage systems for broader use of generative AI in the enterprise sector.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

OpenAI and Intuit expand financial AI collaboration

Yesterday, OpenAI and Intuit announced a major strategic partnership aimed at reshaping how people manage their personal and business finances. The arrangement will allow Intuit apps to appear directly inside ChatGPT, enabling secure and personalised financial actions within a single environment.

The agreement is worth more than $100 million and reinforces Intuit’s long-term push to strengthen its AI-driven expert platform.

Intuit will broaden its use of OpenAI’s most advanced models to support financial tasks across its products. Frontier models will help power AI agents that assist with tax preparation, cash flow forecasting, payroll management and wider financial planning.

Intuit will also continue using ChatGPT Enterprise internally so employees can work with greater speed and accuracy.

The partnership is expected to help consumers make more informed financial choices instead of relying on fragmented tools. Users will be able to explore suitable credit offers, receive clearer tax answers, estimate refunds and connect with tax specialists.

Businesses will gain tailored insights based on real-time data that can improve cash flow, automate customer follow-ups and support more effective outreach through email marketing.

Leaders from both companies argue that the collaboration will give people and firms a meaningful financial advantage. They say greater personalisation, deeper data analysis and more effortless decision making will support stronger household finances and more resilient small enterprises.

The deal expands the growing community of OpenAI enterprise customers and strengthens Intuit’s position in global financial technology.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Google enters a new frontier with Gemini 3

Google has entered a new phase of its AI strategy with the release of Gemini 3, the company’s most advanced model to date.

The new system prioritises deeper reasoning and more subtle multimodal understanding, enabling users to approach difficult ideas with greater clarity instead of relying on repetitive prompting. It marks a major step for Google’s long-term project to integrate stronger intelligence into products used by billions.

Gemini 3 Pro is already available in preview across the Gemini app, AI Mode in Search, AI Studio, Vertex AI and Google’s new development platform known as Antigravity.

The model performs at the top of major benchmarks in reasoning, mathematics, tool use and multimodal comprehension, offering substantial improvements over Gemini 2.5 Pro.

Deep Think mode extends the model’s capabilities even further, reaching new records on demanding academic and AGI-oriented tests, although Google is delaying wider release until additional safety checks conclude.

Users can rely on Gemini 3 to learn complex topics, analyse handwritten material, decode long academic texts or translate lengthy videos into interactive guides instead of navigating separate tools.

Developers benefit from richer interactive interfaces, more autonomous coding agents and the ability to plan tasks over longer horizons.

Google Antigravity enhances this shift by giving agents direct control of the development environment, allowing them to plan, write and validate code independently while remaining under human supervision.

Google emphasises that Gemini 3 is its most extensively evaluated model, supported by independent audits and strengthened protections against manipulation. The system forms the foundation for Google’s next era of agentic, personalised AI and will soon expand with additional models in the Gemini 3 series.

The company expects the new generation to reshape how people learn, build and organise daily tasks instead of depending on fragmented digital services.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Poll manipulation by AI threatens democratic accuracy, according to a new study

Public opinion surveys face a growing threat as AI becomes capable of producing highly convincing fake responses. New research from Dartmouth shows that AI-generated answers can pass every quality check, imitate real human behaviour and alter poll predictions without leaving evidence.

In several major polls conducted before the 2024 US election, inserting only a few dozen synthetic responses would have reversed expected outcomes.
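
To illustrate the arithmetic with hypothetical numbers (not the study’s own data), the sketch below shows how a few dozen fabricated responses added to a 1,000-person poll can flip the apparent leader.

```python
# Hypothetical illustration of how a few dozen fabricated responses can
# flip a poll's predicted leader. All numbers are invented for the example.
real_poll = {"Candidate A": 490, "Candidate B": 510}   # 1,000 genuine respondents
fake_for_a = 45                                         # synthetic responses added for A

padded = dict(real_poll)
padded["Candidate A"] += fake_for_a
total = sum(padded.values())

for name, votes in padded.items():
    print(f"{name}: {votes / total:.1%}")
# Candidate B led 51.0% to 49.0% among genuine respondents;
# after padding, Candidate A appears ahead roughly 51.2% to 48.8%.
```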

The study reveals how easily malicious actors could influence democratic processes. AI models can be directed in other languages yet produce flawless English answers, allowing foreign groups to evade detection.

An autonomous synthetic respondent created for the study passed nearly all attention checks, avoided errors in logic puzzles and adjusted its tone to match assigned demographic profiles rather than exposing its artificial nature.

The potential consequences extend far beyond electoral polling. Many scientific disciplines rely heavily on survey data to track public health risks, measure consumer behaviour or study mental wellbeing.

If AI-generated answers infiltrate such datasets, the reliability of thousands of studies could be compromised, weakening evidence used to shape policy and guide academic research.

Financial incentives further raise the risk. Human participants earn modest fees, while AI can produce survey responses at almost no cost. Existing detection methods failed to identify the synthetic respondent at any stage.

The researcher urges survey companies to adopt new verification systems that confirm the human identity of participants, arguing that stronger safeguards are essential to protect democratic accountability and the wider research ecosystem.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI energy demand strains electrical grids

Microsoft CEO Satya Nadella recently said that the biggest hurdle to deploying new AI solutions is now electrical power, not chip supply. The massive energy requirements of running large language models (LLMs) have created a critical bottleneck for major cloud providers.

Nadella specified that Microsoft currently has a ‘bunch of chips sitting in inventory’ that cannot be plugged in and utilised. The problem is a lack of ‘warm shells’, meaning data centre buildings that are fully equipped with the necessary power and cooling capacity.

The escalating power requirements of AI infrastructure are placing extreme pressure on utility grids and capacity. Projections from the Lawrence Berkeley National Laboratory indicate that US data centres could consume up to 12 percent of the nation’s total electricity by 2028.

The disclosure should serve as a warning to investors, urging them to evaluate the infrastructure challenges alongside AI’s technological promise. This energy limitation could create a temporary drag on the sector, potentially slowing the massive projected returns on the $5 trillion investment.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI threatens global knowledge diversity

AI systems are increasingly becoming the primary source of global information, yet they rely heavily on datasets dominated by Western languages and institutions.

Such reliance creates significant blind spots that threaten to erase centuries of indigenous wisdom and local traditions not currently found in digital archives.

Dominant language models often overlook oral histories and regional practices, including specific ecological knowledge essential for sustainable living in tropical climates.

Experts warn of a looming ‘knowledge collapse’ where alternative viewpoints fade away simply because they are statistically less prevalent in training data.

Future generations may find themselves disconnected from vital human insights as algorithms reinforce a homogenised worldview through recursive feedback loops.

Preserving diverse epistemologies remains crucial for addressing global challenges, such as the climate crisis, rather than relying solely on Silicon Valley’s version of intelligence.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Electricity bills surge as data centres drive up costs across the US

Massive new data centres, built to power the AI industry, are being blamed for a dramatic rise in electricity costs across the US. Residential utility bills in states with high concentrations of these facilities, such as Virginia and Illinois, are surging far beyond the national average.

The escalating energy demand has caused a major capacity crisis on large grids like the PJM Interconnection, with data centre load identified as the primary driver of a multi-billion dollar spike in future power costs. These extraordinary increases are being passed directly to consumers, making affordability a central issue for politicians ahead of upcoming elections.

Lawmakers are now targeting tech companies and AI labs, promising to challenge what they describe as ‘sweetheart deals’ and to make the firms contribute more to the infrastructure they rely upon.

Although rising costs are also attributed to an ageing grid and inflation, experts warn that utility bills are unlikely to decrease this decade due to the unprecedented demand from rapid data centre expansion.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

ALX and Anthropic partner with Rwanda on AI education

A landmark partnership between ALX, Anthropic, and the Government of Rwanda has launched a major AI learning initiative across Africa.

The programme introduces ‘Chidi’, an AI-powered learning companion built on Anthropic’s Claude model. Instead of providing direct answers, the system is designed to guide learners through critical thinking and problem-solving, positioning African talent at the centre of global tech innovation.

The initiative, described as one of the largest AI-enhanced education deployments on the continent, will see Chidi integrated into Rwanda’s public education system. A pilot phase will involve up to 2,000 educators and select civil servants.

According to the partners, the collaboration aims to ensure Africa’s youth become creators of AI technology instead of remaining merely consumers of it.

The three-way collaboration unites ALX’s training infrastructure, Anthropic’s AI technology and Rwanda’s progressive digital policy. The partners say a working group will document insights from the deployment to inform Rwanda’s national AI policy.

The initiative sets a new standard for inclusive, AI-powered learning, with Rwanda serving as a launch hub for future deployments across the continent.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

Cloudflare buys AI platform Replicate

Cloudflare has agreed to purchase Replicate, a platform that simplifies deploying and running AI models. The technology aims to cut down on the GPU hardware and infrastructure typically required for complex AI workloads.

The acquisition will integrate Replicate’s extensive library of over 50,000 AI models into the Cloudflare platform. Developers will be able to access and deploy any of these models globally with as little as a single line of code.
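
As a rough illustration, Replicate’s existing Python client already exposes this kind of single-call workflow; the sketch below uses a placeholder model identifier, and how the client will surface within Cloudflare’s platform has not been specified.

```python
# Sketch of the single-call model invocation Replicate's existing Python
# client offers today. The model slug and prompt are placeholders, and
# REPLICATE_API_TOKEN must be set in the environment.
import replicate

output = replicate.run(
    "owner/some-image-model",            # hypothetical model slug from the library
    input={"prompt": "a lighthouse at dusk"},
)
print(output)
```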

Matthew Prince, Cloudflare’s chief executive, stated the acquisition will make his company the ‘most seamless, all-in-one shop for AI development’. The move abstracts away infrastructure complexities so developers can focus only on delivering amazing products.

Replicate had previously raised $40m in venture funding from prominent US investors. Integrating Replicate’s community and models with Cloudflare’s global network is intended to create a single platform for building the next generation of AI applications.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!