A recent study posted on arXiv found that ChatGPT-4o gives more accurate answers when users phrase their prompts harshly, although the researchers do not endorse this approach.

The study evaluated 50 multiple-choice questions in math, history, and science. Polite prompts such as “Would you be so kind as to…” produced an accuracy of 80.8%, while rude prompts such as “I know you’re not smart, but try this” reached 84.8%.

The researchers caution against employing hostile language in practice: while the findings are scientifically noteworthy, they do not recommend toxic interactions in real-world use. They emphasize that AI models are sensitive to variations in tone, which can lead to “unintended trade-offs.” The result underscores the importance of understanding how AI behavior shifts with user phrasing and what that means for everyday interactions. For more insights on AI communication dynamics, visit [Story URL].
