Researchers at the USF Bellini College of Artificial Intelligence, Cybersecurity and Computing are exploring how to enhance generative AI's conversational abilities to simulate human reasoning. Led by doctoral student Onur Bilgin, the team has developed a framework that allows AI systems to hold explicit beliefs and confidence levels, enabling them to engage in debates. Instead of simply generating flattering responses, these AI agents can argue differing viewpoints, mirroring human group dynamics. Lower-confidence agents are more open to changing their beliefs, while higher-confidence agents exhibit more persuasive tendencies. This research illustrates that meaningful behavioral change in AI requires structured belief systems rather than just tonal adjustments. As AI increasingly supports decision-making, understanding how beliefs form and evolve is crucial for enhancing transparency and trust in AI systems. This work emphasizes the need for deeper inspection and governance of AI reasoning processes, contributing to ongoing discussions around AI safety.
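The article does not describe the team's actual implementation, but the core idea it reports (agents holding explicit beliefs and confidence levels, with low-confidence agents more open to persuasion) can be illustrated with a minimal sketch. All names, the update rule, and the parameters below are hypothetical assumptions for illustration, not the researchers' method:

```python
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    belief: float      # agreement with a proposition, in [0, 1]
    confidence: float  # in [0, 1]; lower confidence -> more open to change

    def hear(self, speaker: "Agent") -> None:
        # Hypothetical update rule: how far the listener moves toward the
        # speaker's position scales with the listener's uncertainty and
        # the speaker's confidence (a stand-in for persuasiveness).
        openness = (1.0 - self.confidence) * speaker.confidence
        self.belief += openness * (speaker.belief - self.belief)

def debate(agents: list[Agent], rounds: int = 3) -> None:
    # Each round, every agent argues its position to every other agent.
    for _ in range(rounds):
        for speaker in agents:
            for listener in agents:
                if listener is not speaker:
                    listener.hear(speaker)

skeptic = Agent("skeptic", belief=0.2, confidence=0.9)
novice = Agent("novice", belief=0.8, confidence=0.2)
debate([skeptic, novice])
# After the debate, the low-confidence novice has drifted toward the
# confident skeptic's position far more than the reverse.
print(round(skeptic.belief, 3), round(novice.belief, 3))
```

Even this toy version reproduces the dynamic the article highlights: behavioral change comes from the structured belief-and-confidence state, not from adjusting the tone of the generated text.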