Summary of ICML’s Strict Stance on AI in Peer Reviews
The 2026 International Conference on Machine Learning (ICML) is taking a firm stance against unethical AI use in peer review. A total of 497 papers, approximately 2% of submissions, were rejected for violating the conference's AI-use policies.
Key points:
- Watermarking System: Organizers employed a watermarking strategy to trace AI-generated peer reviews, allowing them to detect and enforce violations of their regulations.
- Reciprocal Review Policy: Authors must review papers as part of the submission process, promoting accountability.
- Community Response: Many on social media applauded ICML's actions and called for similar measures at other conferences, while debating how effective such policies can be.
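The summary does not describe how ICML's watermarking works, but one commonly discussed approach is a "canary" watermark: invisible text is embedded in the reviewing materials, and any submitted review that echoes the hidden text was likely produced by pasting those materials into an AI model. The sketch below is purely illustrative, assuming a hypothetical zero-width-character canary; the function names and mechanism are this sketch's own, not ICML's.

```python
# Illustrative sketch of a "canary" text watermark (hypothetical, not ICML's
# actual system). An invisible marker is hidden in the reviewing instructions;
# a review that reproduces the marker suggests the instructions were pasted
# into an LLM whose output echoed the hidden text.

ZERO_WIDTH = "\u200b"  # zero-width space, invisible in most renderers

def embed_canary(instructions: str, canary: str) -> str:
    """Append the canary phrase, interleaved with zero-width spaces."""
    hidden = ZERO_WIDTH.join(canary)
    return instructions + ZERO_WIDTH + hidden + ZERO_WIDTH

def review_is_flagged(review_text: str, canary: str) -> bool:
    """Flag a review that reproduces the hidden canary phrase."""
    hidden = ZERO_WIDTH.join(canary)
    return hidden in review_text
```

A human reviewer who reads and retypes the instructions never sees the marker, so legitimate reviews are not flagged; only verbatim machine echoes of the hidden text trip the check.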
Marie Soulière, an expert from Frontiers, emphasized the need for clear guidance on responsible AI use.
As researchers increasingly turn to AI for peer review, the debate continues. Will creating specialized review streams encourage ethical practices?
Join the conversation! Share your thoughts on AI in peer review!