A new AI system from the University of Colorado analyzes thousands of open-access journals to identify potentially predatory publications that undermine scientific credibility. These questionable journals target researchers worldwide, often charging high publishing fees without conducting proper peer review.

The system evaluated more than 15,200 journals and flagged over 1,000 as suspicious, giving researchers an efficient screening tool. It is not foolproof; human experts remain essential for final assessments, but it serves as a valuable first line of defense. Its design emphasizes transparency, highlighting indicators of legitimacy such as editorial board composition and grammatical accuracy.

The initiative addresses the growing problem of predatory publishing and aims to preserve trust in scientific discourse. By strengthening the vetting process, the AI acts as a “firewall for science,” helping maintain research integrity and guard against misinformation. The researchers plan to make the tool available to universities and publishing houses soon.
