Thursday, September 18, 2025

SEI Tool Empowers Federal Agencies to Identify AI Bias and Foster Trust

With artificial intelligence (AI) playing a growing role in national security—shaping logistics, mission planning, and cybersecurity—ensuring the trustworthiness of these systems is crucial. Carnegie Mellon University’s Software Engineering Institute (SEI) has developed the AI Robustness (AIR) tool, an open-source platform designed to identify biases and reliability issues in AI outputs. Unlike traditional evaluation methods, AIR analyzes cause-and-effect relationships rather than mere correlations, strengthening confidence in AI decision-making.

As AI and machine learning (ML) technologies transform sectors such as intelligence analysis and object detection, the AIR tool addresses weaknesses that can undermine security and trust. By applying causal discovery techniques, it gives users deeper insight into how an AI system arrives at its classifications, improving accuracy and transparency. This effort aligns with U.S. government guidelines calling for ongoing AI testing, and the SEI is seeking Department of Defense partners to refine the technology, ultimately advancing trustworthy AI for national security.
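The distinction between correlation and causation that motivates causal analysis can be seen in a toy example. The sketch below (a minimal illustration of the general principle, not the AIR tool's actual algorithm) shows how a hidden confounder can make two variables look strongly related, and how conditioning on that confounder makes the apparent relationship vanish:

```python
# Toy illustration (not the AIR tool's actual method): a spurious
# correlation between x and y, induced by a confounder z, disappears
# once z is conditioned on -- the core idea behind causal analysis.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)                   # hidden common cause
x = z + rng.normal(scale=0.5, size=n)    # x depends only on z
y = z + rng.normal(scale=0.5, size=n)    # y depends only on z

# Raw correlation: x and y appear strongly related.
raw_corr = np.corrcoef(x, y)[0, 1]

def residualize(v, z):
    """Remove the linear effect of z from v."""
    beta = np.cov(v, z)[0, 1] / np.var(z)
    return v - beta * z

# Partial correlation: correlate the residuals after regressing out z.
partial_corr = np.corrcoef(residualize(x, z), residualize(y, z))[0, 1]

print(f"raw corr     = {raw_corr:.2f}")      # strong
print(f"partial corr = {partial_corr:.2f}")  # near zero
```

A purely correlational evaluation would flag x as predictive of y; a causal analysis recognizes that the relationship runs through z, which is the kind of insight the article attributes to AIR's approach.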
