Europe’s growing demand for cloud and AI services is driving a rapid expansion of data centres across the EU.
Policymakers now face the challenge of supporting digital growth without undermining climate targets, yet reliable sustainability data remains scarce.
Operators are required to report on energy consumption, water usage, renewable sourcing and heat reuse, but only around one-third have submitted complete data so far.
Brussels plans to introduce a rating scheme from 2026 that grades data centres on environmental performance, potentially rewarding the most sustainable new facilities with faster approvals under the upcoming Cloud and AI Development Act.
Industry groups want the rules adjusted so operators using excess server heat to warm nearby homes are not penalised. Experts also argue that stronger auditing and stricter application of standards are essential so reported data becomes more transparent and credible.
Smaller data centres remain largely untracked even though they are often less efficient, while colocation facilities complicate oversight because customers manage their own servers. Idle machines also waste vast amounts of energy yet remain largely unmeasured.
Meanwhile, replacing old hardware may improve efficiency but comes with its own environmental cost.
Even if future centres run on cleaner power and reuse heat, the manufacturing footprint of the equipment inside them remains a major unanswered sustainability challenge.
Policymakers say better reporting is essential if the EU is to balance digital expansion with climate responsibility rather than allowing environmental blind spots to grow.
Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!
AI is increasingly used for emotional support and companionship, raising questions about the values embedded in its responses, particularly for Christians seeking guidance. Research cited by Harvard Business Review shows that therapy and companionship have become the leading use of generative AI.
As Christians turn to AI for advice on anxiety, relationships, and personal crises, concerns are growing about the quality and clarity of its responses. Critics warn that AI systems often rely on vague generalities and may lack the moral grounding expected by faith-based users.
A new benchmark released by technology firm Gloo assessed how leading AI models support human flourishing from a Christian perspective. The evaluation examined seven areas, including relationships, meaning, health, and faith, and found consistent weaknesses in how models addressed Christian belief.
The findings show many AI systems struggle with core Christian concepts such as forgiveness and grace. Responses often default to vague spirituality rather than engaging directly with Christian values.
The authors argue that as AI increasingly shapes worldviews, greater attention is needed to how systems serve Christians and other faith communities. They call for clearer benchmarks and training approaches that allow AI to engage respectfully with religious values without promoting any single belief system.
Hangzhou-based biotech start-up MindRank has entered Phase 3 clinical trials for its weight loss drug, marking China’s first AI-assisted Category 1 new drug to reach this stage. The trial involves MDR-001, a small-molecule GLP-1 receptor agonist developed using AI-driven techniques.
MindRank said the weight loss drug was designed to regulate blood sugar and appetite by mimicking natural hormones. According to founder and chief executive Niu Zhangming, the company is targeting regulatory approval in the second half of 2028, with a potential market launch in 2029.
The company said the development process for the weight loss drug took about 4.5 years, significantly shorter than the typical 7 to 10 years required to reach Phase 3 trials. Niu attributed the acceleration to AI tools that reduced research timelines and cut overall R&D costs by more than 60 per cent.
China-based MindRank uses proprietary AI systems, including large language models (LLMs), to identify weight-loss drug targets and shortlist compounds. The approach has raised target research accuracy above 97 per cent and supports safety and efficacy assessments.
Despite these advances, Niu said human expertise remains essential for strategic decision-making and integrating workflows. He added that AI-assisted drug discovery still faces long validation cycles, meaning its impact on life sciences may be more gradual than in other sectors.
Samsung will open its CES 2026 presence with a Sunday evening press conference focused on integrating AI across its product portfolio. The event will take place on 4 January at the Wynn in Las Vegas and will be livestreamed online.
Senior executives, including TM Roh, head of the Device eXperience division, and leaders from Samsung’s visual display and digital appliance businesses, are expected to outline the company’s AI strategy. Samsung says the presentation will emphasise AI as a core layer across products and services.
The company has already previewed several AI-enabled devices ahead of CES. These include a portable projector that adapts to its surroundings, expanded Google Photos integration on Samsung TVs, and new Micro RGB television displays.
The company is also highlighting AI-powered home appliances designed to anticipate user needs. Examples include refrigerators that track food supplies, generate shopping lists, and detect early signs of device malfunction.
New smartphones are not expected at the event, with the next Galaxy Unpacked launch reportedly scheduled for later in January or early February.
A large energy and AI campus is taking shape outside Amarillo, Texas, as startup Fermi America plans to build what it says would be the world’s largest private power grid. The project aims to support large-scale AI training using nuclear, gas, and solar power.
Known as Project Matador, the development would host millions of square metres of data centres and generate more electricity than many US states consume at peak demand. The site is near the Pantex nuclear weapons facility and is part of a broader push for US energy and AI dominance.
Fermi is led by former Texas governor and energy secretary Rick Perry alongside investor Toby Neugebauer. The company plans to deploy next-generation nuclear reactors and offer off-grid computing infrastructure, though it has yet to secure a confirmed anchor tenant.
The scale and cost of the project have raised questions among analysts and local residents. Critics point to financing risks, water use, and the challenge of delivering nuclear reactors on time and within budget, while supporters argue the campus could drive economic growth and national security benefits.
Backed by political momentum and rising demand for AI infrastructure, Fermi is pressing ahead with construction and partnerships. Whether Project Matador can translate ambition into delivery remains a key test as competition intensifies in the global race to power next-generation AI systems.
AI is reshaping Australia’s labour market at a pace that has reignited anxiety about job security and skills. Experts say the speed and visibility of AI adoption have made its impact feel more immediate than previous technological shifts.
Since the public release of ChatGPT in late 2022, AI tools have rapidly moved from novelty to everyday workplace technology. Businesses are increasingly automating routine tasks, including through agentic AI systems that can execute workflows with limited human input.
Research from the HR Institute of Australia suggests the effects are mixed. While some entry-level roles have grown in the short term, analysts warn that clerical and administrative jobs remain highly exposed as automation expands across organisations.
Economic modelling indicates that AI could boost productivity and incomes if adoption is carefully managed, but may also cause short-term job displacement. Sectors with lower automation potential, including construction, care work, and hands-on services, are expected to absorb displaced workers.
Experts and unions say outcomes will depend on skills, policy choices, and governance. Australia’s National AI Plan aims to guide the transition, while researchers urge workers to upskill and use AI as a productivity tool rather than avoiding it.
A Guardian investigation has found that Google’s AI Overviews have displayed false and misleading health information that could put people at risk of harm. The summaries, which appear at the top of search results, are generated using AI and are presented as reliable snapshots of key information.
The investigation identified multiple cases where Google’s AI summaries provided inaccurate medical advice. Examples included incorrect guidance for pancreatic cancer patients, misleading explanations of liver blood test results, and false information about women’s cancer screening.
Health experts warned that such errors could lead people to dismiss symptoms, delay treatment, or follow harmful advice. Some charities said the summaries lacked essential context and could mislead users during moments of anxiety or crisis.
Concerns were also raised about inconsistencies, with the same health queries producing different AI-generated answers at different times. Experts said this variability undermines trust and increases the risk that misinformation will influence health decisions.
Google said most AI Overviews are accurate and helpful, and that the company continually improves quality, particularly for health-related topics. It said action is taken when summaries misinterpret content or lack appropriate context.
Chinese President Xi Jinping said 2025 marked a year of major breakthroughs for the country’s AI and semiconductor industries. In his New Year’s address, he said that Chinese technology firms had made significant progress in AI models and domestic chip development.
China’s AI sector gained global attention with the rise of DeepSeek. The company launched advanced models focused on reasoning and efficiency, drawing comparisons with leading US systems and triggering volatility in global technology markets.
Other Chinese firms also expanded their AI capabilities. Alibaba released new frontier models and pledged large-scale investment in cloud and AI infrastructure, while Huawei announced new computing technologies and AI chips to challenge dominant suppliers.
China’s progress prompted mixed international responses. Some European governments restricted the use of Chinese AI models over data security concerns, while US companies continued engaging with Chinese-linked AI firms through acquisitions and partnerships.
Looking ahead to 2026, China is expected to prioritise AI and semiconductors in its next five-year development plan. Analysts anticipate increased research funding, expanded infrastructure, and stronger support for emerging technology industries.
AI is rapidly becoming the starting point for many everyday activities, from planning and learning to shopping and decision-making. A new report by PYMNTS Intelligence suggests that AI is no longer just an added digital tool, but is increasingly replacing traditional entry points such as search engines and mobile apps.
The study shows that AI use in the United States has moved firmly into the mainstream, with more than 60 per cent of consumers using dedicated AI platforms over the past year. Younger users and frequent AI users are leading the shift, increasingly turning to AI first rather than using it to support existing online habits.
Researchers found that how people use AI matters as much as how often they use it. Heavy users rely on AI across many aspects of daily life, treating it as a general-purpose system, while lighter users remain cautious and limit AI to lower-risk tasks. Trust plays a decisive role, especially when it comes to sensitive areas such as finances and banking.
The report also points to changing patterns in online discovery. Consumers who use standalone AI platforms are more likely to abandon older methods entirely, while those encountering AI through search engines tend to blend it with familiar tools. That difference suggests that the design and context of AI services strongly influence user behaviour.
Looking ahead, the findings hint at how AI could reshape digital commerce. Many consumers say they would prefer to connect digital wallets directly to AI platforms for payments, signalling a potential shift in how intent turns into transactions. As AI becomes a common entry point to the digital world, businesses and financial institutions face growing pressure to adapt their systems to this new starting line.
Face-to-face interviews and oral verification could become a routine part of third-level assessments under new recommendations aimed at addressing the improper use of AI. Institutions are being encouraged to redesign assessment methods to ensure student work is authentic.
The proposals are set out in new guidelines published by the Higher Education Authority (HEA) of Ireland, which regulates universities and other third-level institutions. The report argues that assessment systems must evolve to reflect the growing use of generative AI in education.
While encouraging institutions to embrace AI’s potential, the report stresses the need to ensure students are demonstrating genuine learning. Academics have raised concerns that AI-generated assignments are increasingly difficult to distinguish from original student work.
To address this, the report recommends redesigning assessments to prioritise student authorship and human judgement. Suggested measures include oral verification, process-based learning, and, where appropriate, a renewed reliance on written exams conducted without technology.
The authors also caution against relying on AI detection tools, arguing that integrity processes should be based on dialogue and evidence. They call for clearer policies, staff and student training, and safeguards around data use and equitable access to AI tools.