Liquid AI has launched LFM2, a new model designed to outperform traditional transformer-based architectures in efficiency and generalization, particularly in long-context and resource-constrained scenarios. The weights are open-sourced and available on Hugging Face and in the Liquid Playground, with integration into Liquid AI's Edge AI platform and an iOS app planned. Co-founder Ramin Hasani emphasizes that the model is built for on-device deployment, supporting generative and agentic AI applications.

According to Liquid AI, LFM2 delivers 200% higher throughput and lower latency than comparable models such as Qwen3 and Gemma 3n (MatFormer), along with a 300% improvement in training efficiency, making it a cost-effective foundation for building robust AI systems.

Moving large generative models from the cloud onto local devices brings fast responses, offline operation, and privacy, which are crucial for applications ranging from smart gadgets to robotics. This strategic shift could expand the total addressable market for compact, private foundation models toward $1 trillion by 2035.
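Since the weights are published on Hugging Face, a natural first step for developers is to load them with the standard `transformers` API. The sketch below is a minimal illustration under that assumption; the checkpoint name `LiquidAI/LFM2-1.2B` is hypothetical, so check the Liquid AI organization page on Hugging Face for the actual repository IDs and model sizes.

```python
# Minimal sketch: loading LFM2 weights from Hugging Face with the
# `transformers` library. The repository ID below is an assumption,
# not confirmed by this article -- substitute the real checkpoint name.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LiquidAI/LFM2-1.2B"  # hypothetical checkpoint name

# Depending on your transformers version, trust_remote_code=True
# may be required for newly released architectures.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Generate a short completion to verify the model runs locally.
inputs = tokenizer("Edge AI matters because", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Running a checkpoint locally like this, rather than calling a cloud API, is exactly the on-device deployment pattern the article describes: no network round-trip, offline operation, and data that never leaves the machine.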