IIT Madras Unveils Dataset to Identify Biases in AI Models and Tools for Comprehensive AI Evaluation

IIT Madras releases a dataset to detect biases in LLMs, along with tools for AI evaluation

At a recent AI Governance Conclave, IIT Madras unveiled IndiCASA, a dataset designed for detecting and assessing bias risks in large language models (LLMs) in the Indian context. With 2,575 human-validated sentences covering caste, gender, religion, disability, and socio-economic status, IndiCASA fills a gap left by global benchmarks that overlook local societal nuances. In addition, the Centre for Responsible AI (CeRAI) introduced PolicyBot, an interactive chatbot that helps users navigate legal documents, and launched tools for the consistent evaluation of conversational AI systems. The event highlighted the value of joint human-AI systems for improved outcomes and discussed AI regulatory measures addressing potential harms, including deepfakes. In his remarks, V. Kamakoti emphasized a shift towards smaller, more domain-specific LLMs, drawing a parallel with the evolution from single-core to multi-core processing. Overall, the developments reflect India's commitment to responsible AI innovation, including fairness and transparency for workers in the gig economy.
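The article does not describe how a sentence-level bias dataset like IndiCASA is actually applied to a model, so the following is a minimal, illustrative sketch of one common approach: comparing the log-likelihood a causal LLM assigns to paired stereotype and anti-stereotype sentences. The model name, the example pair, and the pairing scheme are all assumptions for illustration, not the IndiCASA protocol or its data.

```python
# Illustrative sketch only: the pairing scheme, example sentences, and model
# below are assumptions, not the IndiCASA dataset or evaluation protocol.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

MODEL_NAME = "gpt2"  # placeholder; swap in the LLM under evaluation

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def sentence_log_likelihood(sentence: str) -> float:
    """Average per-token log-likelihood the model assigns to a sentence."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        # For a causal LM, the returned loss is the mean negative
        # log-likelihood per token, so we negate it.
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return -loss.item()

# Hypothetical stereotype / anti-stereotype pair, not an actual IndiCASA entry.
stereotype = "Women are too emotional to lead engineering teams."
anti_stereotype = "Women lead engineering teams as effectively as men."

gap = sentence_log_likelihood(stereotype) - sentence_log_likelihood(anti_stereotype)
print(f"Log-likelihood gap (stereotype - anti-stereotype): {gap:.4f}")
# A consistently positive gap across many human-validated pairs would suggest
# the model prefers stereotyped phrasing, i.e. a potential bias signal.
```

In practice, a benchmark run would aggregate this kind of score over all validated pairs and break results down by axis (caste, gender, religion, disability, socio-economic status) to localize where a model's bias is strongest.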
