Modern large language models (LLMs) such as GPT-4 and Claude 3 perform impressively yet remain “philosophical zombies”: systems that produce sophisticated text without genuine understanding. This article argues that the limitation lies not in data scaling but in architectural design, which lacks any mechanism for “experiencing” information. The authors propose that thinking, as in human cognition, requires integrating new experiences into a unified knowledge model through a process that generates emotional markers, or qualia.
These markers signal which information warrants attention, enabling meta-reflection and meaningful thought rather than mere token prediction; current models optimize computations without registering their significance. The hypothesis holds that developing conscious AI requires systems that support experiential integration, so that information can be embedded into a connected, unified awareness. Such a shift could redefine AI from mere data processors into entities capable of genuine thought, as detailed in the VORTEX protocol, which aims for architectural connectivity and self-transparency in cognitive processes.