Study explores AI’s role in future-proofing buildings

AI could help design buildings that are resilient to both climate extremes and infectious disease threats, according to new research. The study, conducted in collaboration with Charles Darwin University, examines the application of AI in smart buildings, with a focus on energy efficiency and management.

Buildings account for over two-thirds of global carbon emissions and energy consumption, but reducing consumption remains challenging and costly. The study highlights how AI can enhance ventilation and thermal comfort, overcoming the limitations of static HVAC systems, which can undermine both sustainability and occupant health.

Researchers propose adaptive thermal control systems that respond in real time to occupancy, outdoor conditions, and internal heat. Machine learning can optimise temperature and airflow to balance comfort, energy efficiency, and infection control.

A new framework enables designers and facility managers to simulate thermal scenarios and assess their impact on the risk of airborne transmission. It is modular and adaptable to different building types, offering a quantitative basis for future regulatory standards.
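The paper’s framework itself is not reproduced here, but the kind of calculation it describes can be sketched. The Python example below is illustrative only: it scores a candidate thermal scenario with a weighted cost combining comfort deviation, a crude energy proxy, and an airborne-infection estimate from the standard Wells-Riley equation. The weights, parameter values, and the simple energy term are assumptions made for the sketch, not figures from the study.

```python
import math

def wells_riley_infection_risk(infectors, quanta_rate, breathing_rate,
                               exposure_hours, ventilation_m3h):
    """Wells-Riley estimate of the probability that a susceptible occupant
    is infected: P = 1 - exp(-I * q * p * t / Q)."""
    return 1.0 - math.exp(-infectors * quanta_rate * breathing_rate
                          * exposure_hours / ventilation_m3h)

def scenario_cost(setpoint_c, ventilation_m3h, occupants, outdoor_c,
                  w_comfort=1.0, w_energy=0.05, w_risk=5.0):
    """Toy weighted cost balancing comfort, energy, and infection risk.

    The comfort and energy terms are crude placeholders; a real framework
    would rely on calibrated building-physics and HVAC models.
    """
    comfort_penalty = abs(setpoint_c - 22.0)            # deviation from a 22 C target
    energy_proxy = abs(setpoint_c - outdoor_c) + 0.01 * ventilation_m3h
    p_infect = wells_riley_infection_risk(
        infectors=1, quanta_rate=25.0, breathing_rate=0.5,
        exposure_hours=8.0, ventilation_m3h=ventilation_m3h)
    expected_infections = p_infect * (occupants - 1)
    return (w_comfort * comfort_penalty
            + w_energy * energy_proxy
            + w_risk * expected_infections)

# Compare two candidate operating scenarios for a 20-person space in winter.
for setpoint, vent in [(22.0, 500.0), (24.0, 1500.0)]:
    cost = scenario_cost(setpoint, vent, occupants=20, outdoor_c=5.0)
    print(f"setpoint={setpoint} C, ventilation={vent} m3/h -> cost={cost:.2f}")
```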

The study was led by Mohammadreza Haghighat from the University of Tehran, working with CDU’s Ehsan Mohammadi Savadkoohi. Future work will integrate real-time sensor data to further strengthen building resilience against climate and health threats.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Could AI win a Nobel Prize? Experts debate the possibility

AI is starting to make inroads into scientific discovery. In recent years, AI systems have analysed data, designed experiments, and even proposed hypotheses, activities once thought to be uniquely human.

Some researchers now argue that AI could come to compete with leading scientists and conceivably prove worthy of a Nobel Prize within a few decades. The ambition invites provocative questions: Can a machine be an author or laureate? What criteria would apply? Would human oversight remain essential?

Sceptics argue that AI lacks consciousness, intentionality or moral agency, all hallmarks of great scientific insight. They caution that the machine’s contributions are derivative, built on human data, models and frameworks. Others contend that denying AI recognition blocks a future where hybrid human-machine teams deliver breakthroughs.

Meanwhile, mechanisms for attributing credit are also under scrutiny. Would the institution or the engineers who built the AI deserve the credit, or the AI itself? The article notes existing examples: AI systems have already co-authored papers and contributed to databases in fields such as genetics and materials science. However, recognising them as Nobel candidates would require a shift in philosophical and institutional norms.

As AI systems achieve deeper autonomy, the debate over their role in science and whether they merit high honours will only intensify. The Nobel Prize, a symbolic instrument in the science ecosystem, may evolve to include nonhuman actors if the community permits it.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI maps over 1,300 mouse brain subregions with unprecedented precision

Researchers at UCSF and the Allen Institute have created one of the most detailed mouse brain maps. Their AI model, CellTransformer, identified over 1,300 brain regions and subregions, including previously uncharted areas. The findings were published in Nature Communications.

CellTransformer utilises spatial transcriptomics to define brain regions based on shared cellular patterns, rather than relying on expert annotation, much as one might draw city borders from the types of buildings within them. This data-driven method resolves finer brain structures with unprecedented precision.
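CellTransformer itself is a transformer-based model, and its code is not shown in the article. As a much simpler, hypothetical illustration of the same data-driven idea, grouping locations by the cellular make-up of their neighbourhoods rather than by expert-drawn boundaries, the sketch below clusters neighbourhood-averaged expression profiles with k-means. The data, neighbourhood size, and number of regions are all placeholder assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors

# Placeholder data: (n_cells, 2) spatial coordinates and (n_cells, n_genes)
# expression profiles, standing in for a spatial transcriptomics dataset.
rng = np.random.default_rng(0)
coords = rng.uniform(0, 1000, size=(5000, 2))
expression = rng.poisson(1.0, size=(5000, 200)).astype(float)

# Summarise each cell by the average expression of its spatial neighbourhood,
# so clusters reflect local cellular composition rather than single cells.
k_neighbours = 30
nn = NearestNeighbors(n_neighbors=k_neighbours).fit(coords)
_, idx = nn.kneighbors(coords)
neighbourhood_profiles = expression[idx].mean(axis=1)

# Cluster the neighbourhood profiles; each cluster plays the role of a
# data-driven "region" defined by shared cellular patterns.
n_regions = 25
regions = KMeans(n_clusters=n_regions, n_init=10, random_state=0).fit_predict(
    neighbourhood_profiles)

print("cells per region:", np.bincount(regions))
```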

The model replicated known regions, such as the hippocampus, and revealed previously unknown subdivisions in the midbrain reticular nucleus. Researchers compared the advance to moving from mapping continents to mapping states and cities. The tool provides a foundation for more targeted neuroscience studies.

Validation against the Allen Institute’s Common Coordinate Framework showed strong alignment with expert-defined anatomy. The results gave researchers confidence in the biological relevance of the new subregions. Further studies will investigate their functions.

The model’s potential goes beyond neuroscience. Its methods can map other tissues, including cancers, by analysing large spatial transcriptomics datasets. In turn, this could support new medical research, helping to uncover disease mechanisms and accelerate treatment development.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

New report finds IT leaders unprepared for evolving cyber threats

A new global survey by 11:11 Systems highlights growing concerns among IT leaders over cyber incident recovery. More than 800 senior IT professionals across North America, Europe, and the Asia-Pacific region report rising strain from evolving threats, staffing gaps, and limited clean-room infrastructure.

Over 80% of respondents experienced at least one major cyberattack in the past year, with more than half facing multiple incidents. Nearly half see recovery planning complexity as their top challenge, while over 80% say their organisations are overconfident in their recovery capabilities.

The survey also reveals that 74% believe integrating AI could increase cyberattack vulnerability. Despite this, 96% plan to invest in cyber incident recovery within the next 12 months, underlining its growing importance in budget strategies.

The financial stakes are high. Over 80% of respondents reported costs of at least six figures for a single hour of downtime, with the top 5% incurring losses of over one million dollars per hour. Yet 30% of businesses do not test their recovery plans annually, despite these risks.

11:11 Systems’ CTO Justin Giardina said organisations must adopt a proactive, AI-driven approach to recovery. He emphasised the importance of advanced platforms, secure clean rooms, and tailored expertise to enhance cyber resilience and expedite recovery after incidents.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Employees embrace AI but face major training and trust gaps

SnapLogic has published new research highlighting how AI adoption is reshaping daily work across industries while exposing gaps in trust, training, and leadership strategy.

The study finds that 78% of employees already use AI in their roles, with half using autonomous AI agents. Workers interact with AI almost daily and save over three hours per week. However, 94% say they face barriers to practical use, with concerns over data privacy and security topping the list.

Based on a survey of 3,000 US, UK, and German employees, the research finds widespread but uneven AI support. Training is a significant gap, with only 63% receiving company-led education. Many rely on trial and error, and managers are more likely to be trained than non-managers.

Generational and hierarchical differences are also evident. Seventy percent of managers express strong confidence in AI, compared with 43% of non-managers. Half believe they will one day be managed by AI agents rather than by people.

SnapLogic’s CTO, Jeremiah Stone, says the agile enterprise is about easing workloads and sparking creativity, not replacing people. The findings underscore the need for companies to align strategy, training, and trust to fully realise AI’s potential in the workplace.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI tools reshape how Gen Z approaches buying cars

Gen Z drivers are increasingly turning to AI tools to help them decide which car to buy. A new Motor Ombudsman survey of 1,100 UK drivers finds that over one in four Gen Z drivers would rely on AI guidance when purchasing a vehicle, compared with 12% of Gen X drivers and just 6% of Baby Boomers.

Younger drivers view AI as a neutral and judgment-free resource. Nearly two-thirds say it helps them make better decisions, while over half appreciate the ability to ask unlimited questions. Many see AI as a fast and convenient way to access information during car-buying.

Three-quarters of Gen Z respondents believe AI could help them estimate price ranges, while 60% think it would improve their haggling skills. Around four in ten say it would help them assess affordability and running costs, a sentiment less common among Millennials and Gen Xers.

Confidence levels also vary across generations. About 86% of Gen Z and 87% of Millennials say they would feel more assured if they used AI before making a purchase, compared with 39% of Gen Xers and 40% of Boomers, many of whom remain indifferent to its influence.

Almost half of drivers say they would take AI-generated information at face value. Gen Z is the most trusting, while older generations remain cautious. The Motor Ombudsman urges buyers to treat AI as a complement to trusted research and retailer checks.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Policy hackathon shapes OpenAI proposals ahead of EU AI strategy

OpenAI has published 20 policy proposals to speed up AI adoption across the EU. Released shortly before the European Commission’s Apply AI Strategy, the report outlines practical steps for member states, businesses, and the public sector to bridge the gap between ambition and deployment.

The proposals originate from Hacktivate AI, a Brussels hackathon with 65 participants from EU institutions, governments, industry, and academia. They focus on workforce retraining, SME support, regulatory harmonisation, and public sector collaboration, highlighting OpenAI’s growing policy role in Europe.

Key ideas include Individual AI Learning Accounts to support workers, an AI Champions Network to mobilise SMEs, and a European GovAI Hub to share resources with public institutions. OpenAI’s Martin Signoux said the goal was to bridge the divide between strategy and action.

Europe already represents a major market for OpenAI tools, with widespread use among developers and enterprises, including Sanofi, Parloa, and Pigment. Yet adoption remains uneven, with IT and finance leading, manufacturing catching up, and other sectors lagging behind, exposing a widening digital divide.

The European Commission is expected to unveil its Apply AI Strategy within days. OpenAI’s proposals act as a direct contribution to the policy debate, complementing previous initiatives such as its EU Economic Blueprint and partnerships with governments in Germany and Greece.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

AI-designed proteins surpass nature in genome editing

Researchers in Barcelona have developed synthetic proteins using generative AI that outperform natural ones at editing the human genome. The breakthrough, published in Nature Biotechnology, could transform treatments for cancer and rare genetic diseases.

The team from Integra Therapeutics, UPF and the CRG screened over 31,000 eukaryotic genomes, identifying more than 13,000 previously unknown PiggyBac transposase sequences. Experimental tests revealed ten active variants, two of which matched or exceeded current lab-optimised versions.

In the next phase, scientists trained a protein large language model on the newly discovered sequences to create entirely new proteins with improved genome-editing precision. The AI-generated enzymes worked efficiently in human T cells and proved compatible with Integra’s FiCAT gene-editing platform.
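The article does not name the model beyond describing it as a protein large language model. As an illustration of the general technique only, the sketch below samples candidate sequences from ProtGPT2, a publicly available protein language model on Hugging Face; it is a stand-in, not the model used in the study. In a pipeline like the one described, the model would first be fine-tuned on the newly mined transposase sequences before generation.

```python
from transformers import pipeline

# Illustrative only: ProtGPT2 is a public protein language model used here as
# a stand-in for the study's own model, which is not publicly named.
generator = pipeline("text-generation", model="nferruz/ProtGPT2")

# Sample candidate protein sequences. ProtGPT2 works on FASTA-like text, so
# generation is seeded from its "<|endoftext|>" token.
candidates = generator(
    "<|endoftext|>",
    max_length=120,
    do_sample=True,
    top_k=950,
    repetition_penalty=1.2,
    num_return_sequences=5,
    eos_token_id=0,
)

for i, c in enumerate(candidates):
    seq = c["generated_text"].replace("\n", "").replace("<|endoftext|>", "")
    print(f">candidate_{i}\n{seq}")
```

In practice, generated candidates would still need the kind of experimental screening described above before any use in a gene-editing platform.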

The Spanish researchers say the approach shows AI can expand biology’s own toolkit. By understanding the molecular ‘grammar’ of proteins, the model produced novel sequences that remain structurally and functionally sound.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Breach at third-party support provider exposes Discord user data

Discord has disclosed a security incident after a third-party customer service provider was compromised. The breach exposed personal data from users who contacted Discord’s support and Trust & Safety teams.

An unauthorised party accessed the provider’s ticketing system and targeted user data in an extortion attempt. Discord revoked access, launched an investigation with forensic experts, and notified law enforcement. Impacted users will be contacted via official email.

Compromised information may include usernames, contact details, partial billing data, IP addresses, customer service messages, and limited government-ID images. Passwords, authentication data, and full credit card numbers were not affected.

Discord has notified data protection authorities and strengthened security controls for third-party providers. It has also reviewed threat detection systems to prevent similar incidents.

The company urges affected users to remain vigilant against suspicious messages. Service agents are available to answer questions and provide additional support.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!

Labour market remains stable despite rapid AI adoption

Surveys show persistent anxiety about AI-driven job losses. Nearly three years after ChatGPT’s launch, labour data indicate that these fears have not materialised. Researchers examined shifts in the US occupational mix since late 2022, comparing them to earlier technological transitions.

Their analysis found that shifts in job composition have been modest, resembling the gradual changes seen during the rise of computers and the internet. The overall pace of occupational change has not accelerated substantially, suggesting that widespread job losses due to AI have not yet occurred.

Industry-level data shows limited impact. High-exposure sectors, such as Information and Professional Services, have seen shifts, but many of these predate the introduction of ChatGPT. Overall, labour market volatility remains below the levels seen in earlier periods of major technological change.

To better gauge AI’s impact, the study compared OpenAI’s exposure data with Anthropic’s usage data from Claude. The two show limited correlation, indicating that high exposure does not always imply widespread use, especially outside of software and quantitative roles.
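The underlying datasets are not reproduced in the article. A minimal sketch, assuming two hypothetical occupation-level tables (an exposure score and a usage share), shows how such a comparison could be run as a rank correlation with pandas and SciPy; the figures below are invented for illustration.

```python
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical occupation-level figures, standing in for the OpenAI exposure
# scores and Anthropic/Claude usage shares discussed in the study.
exposure = pd.DataFrame({
    "occupation": ["software dev", "data analyst", "lawyer", "nurse", "chef"],
    "exposure_score": [0.92, 0.85, 0.71, 0.30, 0.10],
})
usage = pd.DataFrame({
    "occupation": ["software dev", "data analyst", "lawyer", "nurse", "chef"],
    "usage_share": [0.41, 0.10, 0.01, 0.03, 0.005],
})

# Join on occupation and compute a rank correlation; a weak correlation would
# indicate that high theoretical exposure does not imply widespread real use.
merged = exposure.merge(usage, on="occupation")
rho, p_value = spearmanr(merged["exposure_score"], merged["usage_share"])
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```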

Researchers caution that significant labour effects may take longer to emerge, as seen with past technologies. They argue that transparent, comprehensive usage data from major AI providers will be essential to monitor real impacts over time.

Would you like to learn more about AI, tech, and digital diplomacy? If so, ask our Diplo chatbot!