Shanghai-based MiniMax has launched MiniMax-M1, an open-source reasoning model that competes with Chinese rival DeepSeek and major US companies such as OpenAI and Google. Released under the Apache license, MiniMax-M1 is fully open source, in contrast to the more restrictive terms attached to Meta’s Llama and DeepSeek’s models. The model features a 1 million-token context window and can generate up to 80,000 tokens in a single response, exceeding DeepSeek’s capabilities while still trailing OpenAI’s o3 model.
MiniMax claims that M1’s benchmark performance is competitive with leading models such as OpenAI’s o3 and Google’s Gemini 2.5 Pro. The company highlights its Lightning Attention mechanism, which improves computational efficiency on long-context inputs and requires significantly less computing power than competing approaches. Backed by Alibaba, Tencent, and IDG Capital, MiniMax also touts the model’s cost-effectiveness, saying its reinforcement learning phase was completed at a fraction of the anticipated cost. The source code is available on GitHub for independent verification of these performance claims.