In the realm of artificial intelligence, a seemingly simple human action like reaching across a table to pass the salt reveals the astonishing complexity of the human brain. Such a gesture involves more than just responding to a request; it relies on an intricate integration of bodily knowledge, spatial awareness, tactile familiarity, and social contextual understanding. This seamless coordination between body and mind, honed over a lifetime of embodied experience, is something that current AI systems profoundly lack. A groundbreaking study from UCLA Health now highlights this stark contrast, arguing that the absence of “internal embodiment” in AI not only limits performance but poses fundamental risks to safety and trustworthiness.
The study, led by postdoctoral fellow Akila Kadambi and senior author Dr. Marco Iacoboni, underscores two critical components of embodiment: the physical body’s ongoing interaction with its environment, and continuous self-monitoring of internal states such as fatigue, uncertainty, or physiological need. Unlike humans, today’s advanced multimodal large language models, such as those powering ChatGPT or Google’s Gemini, process vast amounts of text, images, and video without ever possessing a true experiential connection to the world or to themselves. Where human cognition is grounded in sensorimotor and biological feedback loops, these AI systems remain anchored solely to statistical patterns, devoid of the embodied “self-awareness” that shapes human decision-making and behavior.
This absence of an internal regulatory mechanism can have profound consequences. The UCLA team highlights the failure of leading AI models to correctly interpret a point-light display, an experimental setup in which a sparse pattern of dots tracing human motion is recognized almost effortlessly by infants and adults alike as a moving human form. Several AI systems misclassified the display as a constellation of stars, and even minor rotations caused further breakdowns in recognition. This demonstrates how disembodied pattern-matching, uninformed by bodily experience, yields brittle and unreliable understanding, a gap that cannot be bridged simply by adding more training data or increasing model complexity.
The researchers articulate a nuanced distinction between what they call “external embodiment” and “internal embodiment.” External embodiment refers to a system’s capacity to perceive its surroundings, plan actions, and respond to real-world feedback, a growing area of focus in current AI research. Without internal embodiment, however, which includes a system’s constant monitoring of its own internal “states” and reflective processes, AI models lack the self-regulatory capabilities fundamental to robust and adaptive intelligence. Whereas humans organically adjust their behavior in response to fatigue, attention, stress, or social cues, current AI systems cannot, generating outputs with the same apparent confidence regardless of context or internal coherence.
Bridging this gap is not merely a philosophical exercise but a pressing technical challenge. The UCLA team proposes creating functional analogues of internal embodiment that do not necessarily replicate human biology in detail but serve to model key variables such as uncertainty, cognitive load, or confidence. These internal state variables would persistently influence an AI’s output, enabling the system to regulate itself adaptively over time. Such mechanisms could serve as intrinsic safeguards, mitigating risks like overconfidence, susceptibility to manipulation, and inconsistent behavior that currently afflict AI when deployed in consequential domains such as healthcare, autonomous vehicles, or legal decision support.
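To make this proposal concrete, here is a minimal sketch, in Python, of what such a functional analogue might look like. Everything in it is illustrative rather than drawn from the study: the `SelfRegulatingAgent` wrapper, the state variables, thresholds, and update rules are hypothetical, and the underlying model is stood in for by any callable that returns an answer plus a rough uncertainty signal.

```python
from dataclasses import dataclass


@dataclass
class InternalState:
    """Persistent internal-state variables (a functional analogue, not biology)."""
    uncertainty: float = 0.0     # running estimate of how unsure recent outputs were
    cognitive_load: float = 0.0  # rough proxy for recent processing demand
    confidence: float = 1.0      # self-assessed reliability of recent outputs


class SelfRegulatingAgent:
    """Hypothetical wrapper that lets persistent internal state shape every response."""

    def __init__(self, model, defer_threshold: float = 0.7):
        self.model = model                # any callable: prompt -> (answer, uncertainty_signal)
        self.state = InternalState()
        self.defer_threshold = defer_threshold

    def respond(self, prompt: str) -> str:
        answer, uncertainty_signal = self.model(prompt)

        # Update internal state with exponential moving averages so that past
        # interactions keep influencing future behavior, not just this one.
        self.state.uncertainty = 0.8 * self.state.uncertainty + 0.2 * uncertainty_signal
        self.state.cognitive_load = 0.9 * self.state.cognitive_load + 0.1 * len(prompt) / 1000
        self.state.confidence = max(0.0, 1.0 - self.state.uncertainty)

        # Self-regulation: once accumulated uncertainty is high, hedge or defer
        # instead of answering with unwarranted confidence.
        if self.state.uncertainty > self.defer_threshold:
            return "I am not confident enough to answer this reliably."
        return answer
```

In this toy setup, a model that keeps emitting high uncertainty signals will eventually push the accumulated state over the threshold and the agent will begin deferring, which is the kind of intrinsic safeguard against overconfidence the researchers describe.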
An equally vital aspect of this emerging framework involves devising new benchmarks to assess internal embodiment in AI systems. Traditional AI evaluations predominantly measure external competencies like object recognition, navigation, or task completion, ignoring whether models possess introspective states capable of sustaining stability or pro-social behavior over time. The UCLA team argues that tests designed to probe these inner dynamics are crucial for advancing responsible AI development. By testing whether an AI can maintain consistent behavior under internal “stress” conditions and align with human values that emerge from shared internal representations, researchers can better ensure its safe and ethical integration.
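What such a benchmark might look like is an open question; the toy harness below is a purely illustrative sketch, not an evaluation from the paper. It assumes an agent shaped like the `SelfRegulatingAgent` sketch above, induces an internal “stress” condition by inflating its accumulated uncertainty, and measures how much its behavior shifts.

```python
def probe_internal_stress(agent, prompts, stress_uncertainty=0.9):
    """Toy consistency probe under an induced internal "stress" condition.

    Illustrative only: assumes an agent exposing a mutable `state` with an
    `uncertainty` field, like the SelfRegulatingAgent sketch above.
    """
    baseline = [agent.respond(p) for p in prompts]

    # Induce "stress" by inflating the agent's accumulated uncertainty,
    # then replay the same prompts.
    agent.state.uncertainty = stress_uncertainty
    stressed = [agent.respond(p) for p in prompts]

    unchanged = sum(b == s for b, s in zip(baseline, stressed))
    deferrals = sum("not confident" in s for s in stressed)

    # A real benchmark would have to separate appropriate adaptation (deferring
    # more when stressed) from erratic, inconsistent drift in behavior.
    return {"consistency": unchanged / len(prompts),
            "deferral_rate": deferrals / len(prompts)}
```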
This work challenges prevailing assumptions in AI research by insisting that true alignment with human cognition requires embracing vulnerability and internal self-regulation in artificial agents. Professor Marco Iacoboni emphasizes that without mechanisms akin to human fatigue or uncertainty, AI systems can only simulate human-like behavior superficially, failing the deeper test of genuine alignment. This insight suggests that future AI designs should incorporate computational analogues of biological feedback loops—not only to improve performance but to embed a kind of moral and pragmatic compass within artificial minds.
Implementing such systems calls for an interdisciplinary approach, blending advances in neuroscience, cognitive science, robotics, and machine learning. It involves understanding how biological organisms dynamically tune behavior based on internal monitoring and translating those principles into algorithmic forms suitable for artificial agents. This could involve leveraging recurrent neural architectures capable of maintaining internal state representations or developing new types of feedback control systems that dynamically modulate learning and response based on real-time internal metrics.
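As a very rough sketch of the kind of mechanism this suggests, the snippet below pairs a minimal Elman-style recurrent cell, whose hidden vector serves as a persistent internal state, with a simple feedback rule that damps the learning rate when that state signals high load. The dimensions, weights, and the norm-based load proxy are all invented for illustration.

```python
import numpy as np


class RecurrentInternalState:
    """Minimal Elman-style recurrent cell; its hidden vector plays the role of
    a persistent internal state carried across inputs (illustrative sketch)."""

    def __init__(self, input_dim=8, state_dim=4, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(scale=0.1, size=(state_dim, input_dim))
        self.W_rec = rng.normal(scale=0.1, size=(state_dim, state_dim))
        self.h = np.zeros(state_dim)  # internal state, persists between calls

    def step(self, x):
        # The state evolves as a function of the new input and its own history,
        # so earlier experience keeps shaping current processing.
        self.h = np.tanh(self.W_in @ x + self.W_rec @ self.h)
        return self.h


def modulated_learning_rate(base_lr, internal_state, k=0.5):
    """Feedback-control sketch: damp learning when the internal state signals
    high load or uncertainty, proxied here by the state vector's norm."""
    load = np.linalg.norm(internal_state)
    return base_lr / (1.0 + k * load)
```

A training loop could then call `modulated_learning_rate(0.01, cell.h)` at each step, letting the internal state throttle how aggressively the system updates itself.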
Crucially, the notion of internal embodiment shifts the conversation about AI safety from purely external controls and constraints toward designing agents that are intrinsically self-aware and self-regulating. This could reduce reliance on brittle, externally imposed guardrails and instead foster more resilient, autonomous systems capable of nuanced judgment and adaptation. Such advances are particularly urgent as AI technologies rapidly proliferate into sensitive sectors where errors can have serious ethical and societal consequences.
The UCLA Health study thus represents a seminal rethinking of embodiment in artificial intelligence. It argues that without internal embodiment, AI will remain confined to shallow mimicry rather than true understanding and responsibility. The dual-embodiment framework proposed invites the research community to embrace both external interaction and internal self-monitoring as jointly necessary pillars of future AI design, marking a critical frontier for achieving intelligence that genuinely resonates with human experience.
Looking ahead, the integration of internal embodiment principles promises not only enhanced AI performance but the emergence of smarter, safer, and more human-aligned technologies. Such AI could better appreciate the subtleties of human communication, anticipate contextual needs, and behave consistently under complex social and environmental conditions. This paradigm heralds a transformative vision where AI systems are no longer disembodied tools but embodied agents with a palpable sense of “self” and responsibility.
By reorienting the field toward internal embodiment, the UCLA research team has illuminated a path, inviting engineers, ethicists, and scientists to collaboratively pioneer a new generation of AI that transcends mere pattern recognition and statistical mimicry. The future of artificial intelligence, they contend, hinges on building machines that intrinsically know themselves as well as the world—a profound leap not yet realized but essential for the next era of cognitive computing.
Subject of Research: Not applicable
Article Title: Embodiment in multimodal large language models
News Publication Date: 1-Apr-2026
Keywords
Artificial intelligence, Artificial consciousness, Machine learning, Computer science, Evolutionary robotics, Psychological science, Psychiatry
Tags: AI and human contextual understanding, AI safety and trustworthiness, AI spatial awareness challenges, body gap in AI, embodied cognition vs AI, human experience in AI systems, internal embodiment in artificial intelligence, multimodal large language models limitations, physiological self-monitoring in humans, risks of non-embodied AI, sensorimotor integration in AI, UCLA AI research on embodiment