On July 8, 2025, Hugging Face launched its latest language model, SmolLM3, a small language model (SLM) with just 3 billion parameters. The multilingual model supports six languages—English, French, Spanish, German, Italian, and Portuguese—and handles long contexts of up to 128,000 tokens, roughly the length of a 300-400-page book. Thanks to its three-stage training process, SmolLM3 significantly outperforms comparable 3B models on knowledge, reasoning, mathematics, and coding benchmarks, and is competitive with larger 4B-parameter models. Notably, SmolLM3 offers two interaction modes: /no_think for quick answers and /think for detailed step-by-step reasoning. Hugging Face aims to share SmolLM3's full training recipe and datasets soon, encouraging community development and making capable AI feasible on local hardware. For more information, see Hugging Face's official blog.
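The two reasoning modes are exposed through the chat template. Below is a minimal sketch of toggling them with the Hugging Face transformers library; it assumes the instruct checkpoint is published on the Hub as HuggingFaceTB/SmolLM3-3B and that the /think and /no_think flags are passed via the system prompt, as described in the model card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Hub repo id for the instruct checkpoint.
model_id = "HuggingFaceTB/SmolLM3-3B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# "/no_think" in the system prompt requests a quick, direct answer;
# swap it for "/think" to get an explicit reasoning trace first.
messages = [
    {"role": "system", "content": "/no_think"},
    {"role": "user", "content": "Summarize the Pythagorean theorem in one sentence."},
]

# Render the conversation with the model's chat template and tokenize it.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Switching the system prompt to "/think" makes the model emit its reasoning before the final answer, which can help on math and coding questions at the cost of extra generated tokens.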