Sarvam Open-Sources 30B and 105B Reasoning Models
Sarvam has open-sourced two new reasoning models at 30B and 105B parameters, expanding the available options for developers building self-hosted AI systems. These models fill an important gap in the open-source ecosystem by providing reasoning capabilities that were previously dominated by closed commercial systems like OpenAI's o1.
The release of multiple model sizes is particularly valuable for local deployment scenarios. The 30B version becomes viable for modest hardware setups with 24GB of VRAM (with weight quantization), while the 105B variant serves teams with more substantial compute infrastructure. Both models can be deployed using standard tools like Ollama and llama.cpp, making them immediately accessible to the local LLM community.
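The 24GB figure for the 30B model only works out once the weights are quantized. A rough back-of-envelope sketch, assuming 4-bit quantization and a ~20% overhead factor for KV cache and activations (the overhead factor is an illustrative assumption, not from the release):

```python
def estimated_vram_gb(params_billion: float, bits_per_weight: float,
                      overhead: float = 1.2) -> float:
    """Rough VRAM estimate: weight bytes plus ~20% for KV cache
    and activations. A heuristic, not an exact figure."""
    bytes_per_weight = bits_per_weight / 8
    return params_billion * bytes_per_weight * overhead

# 30B at 4-bit: ~18 GB, within a single 24 GB GPU
print(round(estimated_vram_gb(30, 4), 1))
# 105B at 4-bit: ~63 GB, needing multi-GPU or aggressive offloading
print(round(estimated_vram_gb(105, 4), 1))
```

The same arithmetic explains why the 105B variant is pitched at teams with more substantial infrastructure: even heavily quantized, it exceeds any single consumer GPU.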
This development signals growing momentum in democratizing advanced reasoning capabilities. For practitioners running self-hosted systems, these open alternatives mean reduced dependency on commercial APIs for reasoning workloads and the ability to fine-tune or customize the models for domain-specific applications.
Source: Google News · Relevance: 8/10