Recent research from MIT Sloan finds that advances in large language models (LLMs) account for only half of the performance gains in generative AI tasks. The other half comes from users refining their prompts, the written instructions given to AI systems. In a study involving 1,900 participants, those using a more advanced model, DALL-E 3, generated images closer to a given target than those using an earlier version, and participants improved over time as they learned that better prompting significantly affects results. The finding underscores that skills beyond technical expertise are crucial. Notably, having AI automatically rewrite users' prompts led to a 58% decline in performance, a warning about the potential pitfalls of automation.

For businesses, the key takeaway is that investing in human learning and user experimentation is essential to getting the most out of AI systems. Practical steps include prioritizing prompt training, building interfaces that encourage iteration, and treating automated prompt rewriting with caution. Together, these practices help organizations harness the full potential of generative AI and close skill gaps among users.
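To make the iterative workflow concrete, below is a minimal sketch of the kind of manual, human-in-the-loop prompt refinement the study credits for much of the gain. It assumes the OpenAI Python SDK (`pip install openai`) and an `OPENAI_API_KEY` in the environment; the specific prompts and revision steps are illustrative inventions, not materials from the study.

```python
# Minimal sketch: a human iterates on prompts by hand, inspecting each
# output and adding detail the previous attempt was missing. Assumes the
# OpenAI Python SDK and OPENAI_API_KEY; prompts here are hypothetical.
from openai import OpenAI

client = OpenAI()

# Successive manual refinements toward an imagined target image: each
# attempt adds information the user judged the last output to lack.
prompt_attempts = [
    "a lighthouse on a cliff",                                  # first try
    "a white lighthouse on a rocky cliff at sunset",            # add color, time of day
    "a white lighthouse on a rocky cliff at sunset, oil painting style",  # add style
]

for attempt, prompt in enumerate(prompt_attempts, start=1):
    response = client.images.generate(
        model="dall-e-3",
        prompt=prompt,
        size="1024x1024",
        n=1,
    )
    # The human inspects each result and decides how to revise the next
    # prompt; the study suggests this judgment step drives much of the gain.
    print(f"attempt {attempt}: {response.data[0].url}")
```

The design point, per the study's caution on automation, is that the revision decision stays with the user rather than being delegated to an automatic prompt rewriter.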