There are many models to choose from, and the available options change frequently. In general, models with more parameters produce better results; for example, Gemma3 27B via Ollama has performed well in testing.
It is recommended to use Ollama models instead of the in-app models for better accuracy. Try a few different ones on challenging strings that include placeholders or special formatting to see which works best for you.
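One practical way to compare models on challenging strings is to check whether every placeholder survives the round trip. The sketch below is a minimal, illustrative comparison helper, not part of any app or of Ollama; the placeholder pattern (printf-style tokens and `{name}` braces) is an assumption you would adapt to your own format strings.

```python
import re

# Hypothetical helper for comparing model outputs on strings with
# placeholders: a translation is only usable if every placeholder
# from the source appears unchanged in the output.
# The pattern below is an assumption (printf-style %s/%d/%f/%@ and {name}).
PLACEHOLDER = re.compile(r"%[sdf@]|\{[^}]*\}")

def placeholders_preserved(source: str, translated: str) -> bool:
    # Compare the multiset of placeholders in both strings.
    return sorted(PLACEHOLDER.findall(source)) == sorted(PLACEHOLDER.findall(translated))

# Example: the French output keeps both %@ and {count} intact.
print(placeholders_preserved(
    "Hello, %@! You have {count} new messages.",
    "Bonjour, %@ ! Vous avez {count} nouveaux messages.",
))
```

Running the same tricky strings through each candidate model and checking them this way makes the comparison repeatable rather than eyeballed.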
Before starting, make sure the model you choose can run on your machine. The model must fit entirely into your system’s RAM; otherwise it will fail to load or run very slowly. As a rule of thumb, you need at least as much free RAM as the model file itself, plus extra overhead for smooth operation.
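The rule of thumb above can be turned into a quick back-of-the-envelope estimate. The helper below is an illustrative sketch, not an Ollama API: it assumes the weights dominate memory use, takes the quantization width (4-bit is common for Ollama builds) and a flat 20% overhead factor, both of which are assumptions you can adjust.

```python
# Hypothetical back-of-the-envelope estimate of the RAM a local model
# needs. Assumes memory use is dominated by the weights, plus a flat
# overhead factor for the runtime, KV cache, and OS headroom.
def estimated_ram_gb(params_billions: float,
                     bits_per_weight: int = 4,
                     overhead_factor: float = 1.2) -> float:
    weight_bytes = params_billions * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead_factor / 1e9  # decimal GB

# Gemma3 27B at 4-bit quantization under these assumptions:
print(round(estimated_ram_gb(27), 1))
```

If the estimate comes out close to or above your machine's total RAM, pick a smaller model or a more aggressive quantization.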