AI safety may hinge on missing human body awareness
AI systems lack the internal bodily awareness that is central to how humans regulate cognition and behaviour, a gap that may affect future AI safety and alignment.
A study from UCLA Health suggests that modern AI systems lack a fundamental aspect of human cognition linked to bodily experience, a gap that may have implications for safety and alignment with human behaviour.
Researchers describe this missing element as ‘internal embodiment’: the process by which humans continuously regulate behaviour through bodily signals. While current AI systems can process and describe the physical world, they do not experience internal states such as fatigue, uncertainty, or physical need.
According to the study published in Neuron, this absence limits how AI systems interpret and respond to situations compared with humans, whose cognition is shaped by continuous interaction between brain and body.
The research distinguishes between external interaction and internal self-monitoring, arguing that most AI development focuses only on the former. Without internal regulatory signals, systems may lack natural constraints that guide consistency, caution, and awareness of uncertainty in decision-making.
Researchers propose a ‘dual-embodiment’ framework introducing internal state tracking in AI systems, alongside new benchmarks to assess stability and uncertainty.
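To make the idea of internal state tracking concrete, here is a minimal, hypothetical sketch of what such a mechanism might look like in code. Everything below — the class names, the signals chosen (uncertainty and a ‘fatigue’ proxy), and the thresholds — is an illustrative assumption, not an implementation from the study.

```python
# Hypothetical sketch of internal state tracking for an AI agent.
# Names, signals, and thresholds are illustrative assumptions,
# not taken from the Neuron study.
from dataclasses import dataclass


@dataclass
class InternalState:
    uncertainty: float = 0.0   # running estimate of predictive uncertainty
    fatigue: float = 0.0       # proxy for accumulated resource strain


class DualEmbodimentAgent:
    """Pairs external task output with internal self-monitoring."""

    def __init__(self, uncertainty_cap: float = 0.7, fatigue_cap: float = 0.9):
        self.state = InternalState()
        self.uncertainty_cap = uncertainty_cap
        self.fatigue_cap = fatigue_cap

    def update(self, prediction_error: float, workload: float) -> None:
        # Exponential moving averages stand in for real interoceptive signals.
        self.state.uncertainty = 0.9 * self.state.uncertainty + 0.1 * prediction_error
        self.state.fatigue = min(1.0, self.state.fatigue + 0.05 * workload)

    def act(self, answer: str) -> str:
        # Internal signals act as natural constraints on behaviour:
        # high fatigue -> defer; high uncertainty -> hedge the answer.
        if self.state.fatigue > self.fatigue_cap:
            return "DEFER: internal resources depleted"
        if self.state.uncertainty > self.uncertainty_cap:
            return f"UNSURE: {answer}"
        return answer


agent = DualEmbodimentAgent()
agent.update(prediction_error=0.9, workload=0.5)
print(agent.act("42"))  # one noisy observation: uncertainty still low, plain answer
```

The point of the sketch is the separation the researchers describe: the external channel (`act`) is gated by internal signals (`update`) rather than by task performance alone, so repeated high prediction error eventually forces the agent to flag its own unreliability.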
The authors argue that AI safety may require more than improved external performance, and that internal regulatory mechanisms could help systems behave more consistently, predictably, and in line with human expectations in real-world use.
