
Caution: Relying on Your AI Chatbot as a Search Engine Can Mislead You


Generative AI is trained on vast text datasets to predict which words are likely to follow one another, a process that can inadvertently produce misinformation. A historical parallel: British government advice, issued in 1917 and repeated during WWII, encouraged eating rhubarb leaves, and the resulting poisonings (the leaves are toxic) showed how misinformation can persist even after corrections.

Generative AI models such as ChatGPT are not traditional search engines; they generate sentences from statistical patterns rather than verifying facts. OpenAI acknowledges that these models sometimes produce “plausible yet incorrect” information, which poses real risks in critical areas like healthcare: in one evaluation, ChatGPT misidentified medical emergencies more than half the time. Research also indicates that generative AI misrepresents news content 45% of the time, raising further safety concerns.

As generative AI is integrated into governance and healthcare, falling back on verified, established information can serve as a safeguard. Educating users on cautious AI use is vital, and consulting traditional resources may still be the most reliable way to retrieve accurate information.
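To see why predicting word adjacency is not the same as verifying accuracy, here is a minimal sketch (an assumption for illustration only — real chatbots use far larger neural models, not this toy bigram counter). The model learns which words tend to follow which in its training text and samples accordingly; if the training text contains a false statement, the model can fluently reproduce it:

```python
import random
from collections import defaultdict

# Toy training text containing both a true and a false claim about rhubarb.
corpus = (
    "rhubarb leaves are edible . "   # false statement present in training data
    "rhubarb stalks are edible . "
    "rhubarb leaves are toxic . "
).split()

# Count which words follow which — adjacency, not truth.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start, n=4, seed=0):
    """Sample a short continuation by repeatedly picking a plausible next word."""
    random.seed(seed)
    out = [start]
    for _ in range(n):
        out.append(random.choice(following[out[-1]]))
    return " ".join(out)

# Output is fluent either way; nothing checks whether it is factually correct.
print(generate("rhubarb"))
```

Whether the sampler emits "rhubarb leaves are toxic" or "rhubarb leaves are edible" depends only on the statistics of the training text and the random draw — which is exactly why such systems need external verification.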


