One week before her thesis deadline, Xiaobing, a German literature senior at a university in northeast China, ran into an unexpected obstacle: her university mandated that all theses pass AI content detectors, rejecting any flagged as more than 30% AI-generated. Although she believed her paper was entirely her own work, a detector flagged half of it as AI-written. Her case illustrates a paradox facing many Chinese students: rules meant to discourage AI use have instead pushed students toward AI tools to evade detection. With universities enforcing strict AI limits, some allowing only 15% to 40% AI content, students face pressure to pay for AI rewriting services or accept the errors introduced by cheap rewrites. Complaints about the unreliability of detection tools abound, leaving students frustrated and confused. Calls for a more balanced approach are growing, with educators urging universities to reassess their dependence on flawed detection systems.