A report from the University of Graz highlights the risks of using artificial intelligence (AI) in Austria’s asylum processes. While AI tools, including mobile phone data analysis and automated translations, are intended to streamline case evaluations, researchers argue that their drawbacks outweigh potential benefits. Mobile phone analysis, employed to ascertain asylum seekers’ identities and backgrounds, is criticized for breaching privacy rights and lacking reliability. Similarly, automated translation tools can misinterpret meanings, leading to errors in communication that affect asylum decisions. The authors express concerns that these technologies, including facial recognition and proposed dialect recognition, may exacerbate discrimination and violate human dignity. Overall, the report cautions against implementing AI in asylum processing without stringent oversight, asserting that existing tools compromise fairness and human rights standards. As Austria’s Ministry of the Interior considers further AI applications, the authors warn that asylum seekers may become experimental subjects for technology that citizens would reject.