When discussing generative AI, foundation models like OpenAI’s ChatGPT and Google’s Gemini dominate the conversation. These large models, pretrained on vast internet datasets, let users accomplish tasks through natural language. However, they often produce inaccurate information, known as “hallucinations”: Google’s Gemini hallucinates roughly 1.1% of the time, for example, while Mistral’s Large 2 reaches 4.1%. Error rates like these are unacceptable for government agencies, where mission-critical tasks demand precision.

To address this, Retrieval-Augmented Generation (RAG) systems restrict a model’s inputs to curated, context-specific information, improving reliability. Deploying RAG effectively hinges on collaboration with IT, a broad understanding of the relevant data (including contextual metadata), and traceability and auditability of AI outputs: public servants need robust citation capabilities to verify AI-generated information.

As AI transforms government operations, tailored solutions like those offered by Civic Roundtable are essential for improving service delivery while maintaining accuracy and accountability.
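The retrieval-and-citation pattern described above can be sketched as follows. This is a minimal illustration, not any vendor’s implementation: the document ids, sources, and keyword-overlap scorer are assumptions standing in for a real retriever, and the result is a grounded prompt rather than a call to an actual model.

```python
# Toy RAG sketch: score curated documents against a query by keyword
# overlap, then assemble a prompt from the top passages, each tagged
# with citation metadata so answers can be traced back to a source.
# (Corpus contents are illustrative placeholders, not real policies.)

CORPUS = [
    {"id": "policy-001", "source": "Records Retention Manual, sec. 4",
     "text": "Agencies must retain email records for seven years."},
    {"id": "policy-002", "source": "Public Comment Handbook, ch. 2",
     "text": "Public comments must be acknowledged within ten business days."},
    {"id": "policy-003", "source": "IT Security Baseline, app. B",
     "text": "All systems handling citizen data require annual audits."},
]

def retrieve(query, corpus, k=2):
    """Rank documents by shared keywords with the query (toy scorer;
    production systems use embeddings or a search index instead)."""
    q_words = set(query.lower().split())
    scored = []
    for doc in corpus:
        overlap = len(q_words & set(doc["text"].lower().split()))
        scored.append((overlap, doc))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:k] if score > 0]

def build_prompt(query, corpus):
    """Assemble a grounded prompt: retrieved passages plus citations."""
    hits = retrieve(query, corpus)
    context = "\n".join(
        f"[{doc['id']}] ({doc['source']}) {doc['text']}" for doc in hits
    )
    return (f"Answer using ONLY the sources below, citing their ids.\n"
            f"{context}\nQuestion: {query}")

prompt = build_prompt("How long must agencies retain email records?", CORPUS)
print(prompt)
```

Because every passage carries its id and source in the prompt, the model can be instructed to cite them, giving reviewers the traceability and auditability the text calls for.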