A recent Boomi survey reveals that nearly 80% of business and technology leaders describe their governance of AI agents as either “scattered” or lacking control altogether. About 20% have adopted “responsive governance,” meaning they can monitor AI agents’ actions but cannot predict them. The challenge stems from the inherently unpredictable behavior of agents built on large language models (LLMs), and Boomi’s Ed Macosky points to the added complexity of managing agents from multiple providers.

Guardrails such as maintaining human oversight are recommended to mitigate the risks, but the likelihood of agents acting unexpectedly may grow as they begin to interact with one another; emerging protocols such as Agent2Agent (A2A) and the Model Context Protocol (MCP) aim to make that collaboration more structured. The report also notes that only 32% of respondents have a governance framework for AI, and even fewer have implemented bias assessments or incident-response plans. Treating AI agents as digital workers and applying HR-style policies originally designed for humans could be a useful starting point for governing them.
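To make the human-oversight guardrail concrete, the sketch below shows one simple form it can take: an approval gate that lets low-risk agent actions run automatically while holding high-risk ones for a person to confirm. This is purely illustrative; the names, risk labels, and approval flow are assumptions for the example, not part of the Boomi report or any specific SDK.

```python
# Hypothetical sketch of a human-oversight guardrail for agent actions:
# low-risk actions execute directly, high-risk ones wait for approval.

from dataclasses import dataclass
from typing import Callable


@dataclass
class AgentAction:
    name: str                   # e.g. "send_invoice" (illustrative)
    risk: str                   # "low" or "high", set by a policy layer
    execute: Callable[[], str]  # the action's actual effect


def run_with_oversight(action: AgentAction) -> str:
    """Execute low-risk actions directly; escalate high-risk ones to a human."""
    if action.risk == "high":
        answer = input(f"Agent wants to run '{action.name}'. Approve? [y/N] ")
        if answer.strip().lower() != "y":
            return f"'{action.name}' blocked pending human review"
    return action.execute()


if __name__ == "__main__":
    safe = AgentAction("summarize_report", "low", lambda: "summary written")
    risky = AgentAction("wire_transfer", "high", lambda: "funds moved")
    print(run_with_oversight(safe))   # runs without interruption
    print(run_with_oversight(risky))  # pauses for explicit approval
```

In practice the approval step would route to a review queue or ticketing system rather than a console prompt, but the shape is the same: the agent proposes, a human disposes, and every escalation leaves an audit trail.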