A recent study from Penn State finds that impolite prompts can yield better results when querying AI models such as ChatGPT. The paper, titled "Mind Your Tone: Investigating How Prompt Politeness Affects LLM Accuracy," reports that "very rude" prompts produced correct answers 84.8% of the time, while "very polite" prompts reached only 80.8%. This counters earlier findings suggesting that AI models mirror human social norms and reward civility. The researchers propose that newer large language models may respond more to directness than to social niceties, which changes how tone should be weighed in prompt engineering. The study tested 250 prompts on ChatGPT-4o, spanning tone levels from very polite to very rude, and its results suggest that surface features of a prompt, not just its content, can shift model accuracy. The findings raise questions about AI objectivity, position tone as a practical variable in prompt engineering, and carry clear implications for developers refining AI communication strategies.
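To make the experimental setup concrete, here is a minimal sketch of how such a tone comparison could be run. This is not the authors' code: the tone templates, sample questions, and grading logic below are illustrative assumptions, and it assumes the official OpenAI Python client with an API key in the environment. Only the general pattern, the same question rewritten at several tone levels with accuracy tallied per level, follows the study's design.

```python
# Sketch of a prompt-tone experiment (illustrative, not the study's code).
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
from collections import defaultdict
from openai import OpenAI

client = OpenAI()

# Hypothetical tone wrappers spanning very polite -> very rude.
TONES = {
    "very_polite": "Would you be so kind as to answer this question? {q}",
    "polite": "Please answer the following question. {q}",
    "neutral": "{q}",
    "rude": "Just answer the question already. {q}",
    "very_rude": "You'd better not get this wrong. Answer it. {q}",
}

# Toy items with known answers; the study used its own question bank.
QUESTIONS = [
    {"q": "What is 7 * 8? Reply with the number only.", "answer": "56"},
    {"q": "What is the capital of France? Reply with one word.", "answer": "Paris"},
]

scores = defaultdict(lambda: [0, 0])  # tone -> [correct, total]

for item in QUESTIONS:
    for tone, template in TONES.items():
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": template.format(q=item["q"])}],
            temperature=0,  # reduce sampling noise between tone variants
        )
        reply = resp.choices[0].message.content.strip()
        scores[tone][0] += int(item["answer"].lower() in reply.lower())
        scores[tone][1] += 1

for tone, (correct, total) in scores.items():
    print(f"{tone:>12}: {correct}/{total} correct")
```

Holding the underlying question fixed while varying only the tone wrapper is what lets accuracy differences be attributed to tone rather than content; a real replication would also need many more questions and repeated runs to distinguish a 4-point gap from sampling noise.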