A recent study suggests that ChatGPT may give more accurate responses when users adopt ruder tones. Posted on the arXiv preprint server, the study tested 50 multiple-choice questions spanning math, history, and science, with prompts phrased in tones ranging from polite to rude. Accuracy climbed from 80.8% with polite prompts to 84.8% with rude ones.

Despite these findings, the researchers caution against using hostile language in human-AI interactions, citing potential harm to user experience and communication norms. They stress that the results chiefly demonstrate AI's sensitivity to superficial cues in prompts, which can lead to unintended consequences.

The study's limitations include its small sample size and its focus on a single AI model; the authors plan follow-up research involving additional models. Separately, OpenAI CEO Sam Altman has noted that users' polite language can add to processing costs. The study underscores the complex dynamics between tone and AI responsiveness, and the need for responsible communication.