AI-driven poll manipulation threatens polling accuracy, new study finds

Public opinion surveys face a growing threat as AI becomes capable of producing highly convincing fake responses. New research from Dartmouth shows that AI-generated answers can pass every quality check, imitate real human behaviour and alter poll predictions without leaving evidence.

In several major polls conducted before the 2024 US election, inserting only a few dozen synthetic responses would have reversed expected outcomes.
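To see why a few dozen responses can be decisive, consider a hypothetical sketch (the figures below are illustrative, not taken from the study): in a two-candidate poll of 1,500 respondents split 51–49, the leader’s raw-count margin is only 30 responses, so roughly 31 fabricated answers for the trailing side would flip the headline result.

```python
def fakes_needed_to_flip(n_real, leader_share):
    """Minimum fabricated responses for the trailing side needed to
    overturn the leader's raw-count margin in a two-candidate poll.

    Simplification: ignores undecided voters, weighting and sampling
    error; purely an arithmetic illustration."""
    leader = round(n_real * leader_share)
    trailer = n_real - leader
    return leader - trailer + 1

# A 51-49 poll of 1,500 people flips with just 31 synthetic responses.
print(fakes_needed_to_flip(1500, 0.51))  # -> 31
```

Real polls apply weighting and report margins of error, but the order of magnitude holds: when races are close, a handful of synthetic respondents can move the reported winner.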

The study reveals how easily malicious actors could influence democratic processes. AI models prompted in other languages can still deliver flawless English answers, allowing foreign groups to bypass detection.

An autonomous synthetic respondent created for the study passed nearly all attention checks, avoided errors in logic puzzles and adjusted its tone to match assigned demographic profiles rather than exposing its artificial nature.

The potential consequences extend far beyond electoral polling. Many scientific disciplines rely heavily on survey data to track public health risks, measure consumer behaviour or study mental wellbeing.

If AI-generated answers infiltrate such datasets, the reliability of thousands of studies could be compromised, weakening evidence used to shape policy and guide academic research.

Financial incentives further raise the risk. Human participants earn modest fees, while AI can produce survey responses at almost no cost. Existing detection methods failed to identify the synthetic respondent at any stage.

The researcher urges survey companies to adopt new verification systems that confirm the human identity of participants, arguing that stronger safeguards are essential to protect democratic accountability and the wider research ecosystem.

Would you like to learn more about AI, tech and digital diplomacy? If so, ask our Diplo chatbot!

AI energy demand strains electrical grids

Microsoft CEO Satya Nadella recently delivered a key insight, stating that the biggest hurdle to deploying new AI solutions is now electrical power, not chip supply. The massive energy requirements for running large language models (LLMs) have created a critical bottleneck for major cloud providers.

Nadella specified that Microsoft currently has a ‘bunch of chips sitting in inventory’ that cannot be plugged in and utilised. The problem is a lack of ‘warm shells’, meaning data centre buildings that are fully equipped with the necessary power and cooling capacity.

The escalating power requirements of AI infrastructure are placing extreme pressure on utility grids and capacity. Projections from the Lawrence Berkeley National Laboratory indicate that US data centres could consume up to 12 percent of the nation’s total electricity by 2028.

The disclosure should serve as a warning to investors, urging them to weigh the infrastructure challenges alongside AI’s technological promise. This energy limitation could act as a temporary drag on the sector, potentially delaying returns on its projected $5 trillion in investment.


AI threatens global knowledge diversity

AI systems are increasingly becoming the primary source of global information, yet they rely heavily on datasets dominated by Western languages and institutions.

Such reliance creates significant blind spots that threaten to erase centuries of indigenous wisdom and local traditions not currently found in digital archives.

Dominant language models often overlook oral histories and regional practices, including specific ecological knowledge essential for sustainable living in tropical climates.

Experts warn of a looming ‘knowledge collapse’ where alternative viewpoints fade away simply because they are statistically less prevalent in training data.

Future generations may find themselves disconnected from vital human insights as algorithms reinforce a homogenised worldview through recursive feedback loops.

Preserving diverse epistemologies remains crucial for addressing global challenges, such as the climate crisis, rather than relying solely on Silicon Valley’s version of intelligence.


Electricity bills surge as data centres drive up costs across the US

Massive new data centres, built to power the AI industry, are being blamed for a dramatic rise in electricity costs across the US. Residential utility bills in states with high concentrations of these facilities, such as Virginia and Illinois, are surging far beyond the national average.

The escalating energy demand has caused a major capacity crisis on large grids such as the PJM Interconnection, with data centre load identified as the primary driver of a multi-billion dollar spike in future power costs. These extraordinary increases are being passed directly to consumers, making affordability a central issue for politicians ahead of upcoming elections.

Lawmakers are now targeting tech companies and AI labs, promising to challenge what they describe as ‘sweetheart deals’ and to make the firms contribute more to the infrastructure they rely upon.

Although rising costs are also attributed to an ageing grid and inflation, experts warn that utility bills are unlikely to decrease this decade due to the unprecedented demand from rapid data centre expansion.


ALX and Anthropic partner with Rwanda on AI education

A landmark partnership between ALX, Anthropic, and the Government of Rwanda has launched a major AI learning initiative across Africa.

The programme introduces ‘Chidi’, an AI-powered learning companion built on Anthropic’s Claude model. Instead of providing direct answers, the system is designed to guide learners through critical thinking and problem-solving, positioning African talent at the centre of global tech innovation.

The initiative, described as one of the largest AI-enhanced education deployments on the continent, will see Chidi integrated into Rwanda’s public education system. A pilot phase will involve up to 2,000 educators and select civil servants.

According to the partners, the collaboration aims to ensure Africa’s youth become creators of AI technology instead of remaining merely consumers of it.

The three-way collaboration unites ALX’s training infrastructure, Anthropic’s AI technology, and Rwanda’s progressive digital policy. A working group will document insights from the rollout to inform Rwanda’s national AI policy.

The initiative sets a new standard for inclusive, AI-powered learning, with Rwanda serving as a launch hub for future deployments across the continent.


Cloudflare buys AI platform Replicate

Cloudflare has agreed to acquire Replicate, a platform that simplifies deploying and running AI models, reducing the GPU hardware and infrastructure typically required for complex AI workloads.

The acquisition will integrate Replicate’s extensive library of over 50,000 AI models into the Cloudflare platform. Developers can then access and deploy any AI model globally using just a single line of code for rapid implementation.

Matthew Prince, Cloudflare’s chief executive, stated the acquisition will make his company the ‘most seamless, all-in-one shop for AI development’. The move abstracts away infrastructure complexities so developers can focus only on delivering amazing products.

Replicate had previously raised $40m in venture funding from prominent US investors. Integrating Replicate’s community and model library with Cloudflare’s global network is intended to create a single platform for building the next generation of AI applications.


Abridge AI scribe allegedly gives doctors an hour back daily

A new study led by Yale University found that Abridge’s ambient AI scribe significantly reduces burnout among medical professionals. Clinicians who used the documentation technology reported a sharp decline in burnout over the first thirty days of use.

AI may offer a scalable solution to the administrative demands faced by practitioners nationwide. The study, published in JAMA Network Open, examined 263 practitioners across six healthcare systems.

Burnout rates dropped from 51.9 percent to 38.8 percent after the one-month intervention. A secondary analysis found the AI scribe reduced the odds of burnout by a substantial 74 percent.
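The gap between the 13-point drop in burnout rates and the 74 percent odds reduction can be confusing, because odds and percentages are different quantities. As a rough illustration only (the study’s 74 percent figure comes from its adjusted secondary analysis, not from this naive calculation), converting the headline rates into odds looks like this:

```python
def odds(p):
    """Convert a probability (e.g. a burnout rate) into odds."""
    return p / (1 - p)

before = odds(0.519)          # ~1.08: slightly more clinicians burnt out than not
after = odds(0.388)           # ~0.63
odds_ratio = after / before   # ~0.59 unadjusted

print(round(before, 2), round(after, 2), round(odds_ratio, 2))
```

The unadjusted odds ratio from the headline rates is about 0.59 (a roughly 41 percent reduction); the larger 74 percent figure reflects the study’s adjusted modelling, which accounts for factors a raw comparison ignores.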

The ambient AI scribe also led to substantial improvements in the clinicians’ cognitive task load. Practitioners reported they were better able to give undivided attention to patients during their clinical consultations.

High documentation demands are increasing clinician attrition, whilst physician shortages multiply across the sector. Reducing the burdensome administrative load is now critical for maintaining quality patient care and professional well-being.


Europe ramps up bid for digital independence

European leaders gathered in Berlin for the Summit on European Digital Sovereignty, where France and Germany unveiled a series of major commitments to boost the EU’s technological autonomy and competitiveness. The event brought together more than 900 policymakers, industry figures, and researchers from across the bloc to outline new measures aimed at reducing reliance on non-EU technologies, strengthening digital infrastructure, and supporting European innovation.

Paris and Berlin identified seven strategic areas for action, including simplifying EU digital regulation, strengthening competition in strategic markets, and establishing higher protection standards for Europe’s most sensitive data. The two countries also endorsed the expansion of digital commons, backed the rollout of the EU Digital Identity Wallet, and committed to broadening the use of open-source tools within public administrations.

A new Franco-German task force will work on defining what constitutes a European digital service, developing indicators of sovereignty, and shaping policy tools to reinforce strategic sectors, including cloud services, AI, and cybersecurity.

The summit also highlighted ambitions to make Europe a leader in frontier AI by fostering public-private collaboration and attracting large-scale investment. European tech companies pledged over €12 billion for key digital technologies, signalling a strong private-sector commitment to the sovereignty agenda.

German Chancellor Friedrich Merz and French President Emmanuel Macron both praised the progress made, stressing that Europe must shape its technological future on its own terms and accelerate the development and adoption of homegrown solutions.

With political momentum, cross-border cooperation, and significant financial backing, the summit marked one of the EU’s most coordinated pushes yet to build a secure, competitive, and sovereign digital ecosystem.


OpenAI accelerates enterprise AI growth after Gartner names it an emerging leader

US tech firm OpenAI gained fresh momentum after being named an Emerging Leader in Generative AI by Gartner. The assessment highlights strong industry confidence in OpenAI’s ability to support companies that want reliable and scalable AI systems.

Enterprise clients have increasingly adopted the company’s tools after significant investment in privacy controls, data governance frameworks and evaluation methods that help organisations deploy AI safely.

More than one million companies now use OpenAI’s technology, driven by workers who request ChatGPT as part of their daily tasks.

Over 800 million weekly users arrive already familiar with the tool, shortening pilot phases and improving returns rather than slowing transformation with lengthy onboarding. ChatGPT Enterprise has expanded sharply, recording ninefold growth in seats over the past year.

OpenAI views generative AI as a new layer of enterprise infrastructure rather than a peripheral experiment. The next generation of systems is expected to be more collaborative and closely integrated with corporate operations, supporting new ways of working across multiple sectors.

The company aims to help organisations convert AI strategies into measurable results, rather than abstract ambitions.

Executives described the recognition as encouraging, although they stressed that broader progress still lies ahead. OpenAI plans to continue strengthening its enterprise platform, enabling businesses to integrate AI responsibly and at scale.


Report calls for new regulations as AI deepfakes threaten legal evidence

US courtrooms increasingly depend on video evidence, yet researchers warn that the legal system is unprepared for an era in which AI can fabricate convincing scenes.

A new report led by the University of Colorado Boulder argues that national standards are urgently needed to guide how courts assess footage generated or enhanced by emerging technologies.

The authors note that judges and jurors receive little training on evaluating altered clips, despite more than 80 percent of cases involving some form of video.

Concerns have grown as deepfakes become easier to produce. A civil case in California collapsed in September after a judge ruled that a witness video was fabricated, and researchers believe such incidents will rise as tools like Sora 2 allow users to create persuasive simulations in moments.

Experts also warn about the spread of the so-called deepfake defence, where lawyers attempt to cast doubt on genuine recordings instead of accepting what is shown.

AI is also increasingly used to clean up real footage and to match surveillance clips with suspects. Such techniques can improve clarity, yet they also risk deepening inequalities when only some parties can afford to use them.

High-profile errors linked to facial recognition have already led to wrongful arrests, reinforcing the need for more explicit courtroom rules.

The report calls for specialised judicial training, new systems for storing and retrieving video evidence and stronger safeguards that help viewers identify manipulated content without compromising whistleblowers.

Researchers hope the findings prompt legal reforms that place scientific rigour at the centre of how courts treat digital evidence as it shifts further into an AI-driven era.
