Ollama
Note
This document is specific to Careti. It follows the Careti v3.38.1 merge; where Careti-specific policies apply (supported local runtimes, authentication/routing, model restrictions), they are marked in the body with <Note>.
Prerequisites
- Windows, macOS, or Linux computer
- Careti installed in VS Code
Setup Steps
1. Install Ollama
- Visit ollama.com
- Download and install for your operating system
2. Choose and Download a Model
- Browse models at ollama.com/search
- Select a model and copy its run command:
ollama run [model-name]
- Open your terminal and run the command. For example:
ollama run llama2
- Your model is now ready to use within Careti.
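The download step above can be sketched as a small shell snippet. It assumes the ollama CLI is already on your PATH and uses llama2 as an example model; ollama pull fetches a model without opening an interactive chat, and ollama list confirms it is available locally.

```shell
# Sketch of the model-download step (assumes the `ollama` CLI is installed).
if command -v ollama >/dev/null 2>&1; then
  ollama pull llama2   # download the model without starting a chat session
  ollama list          # confirm the model appears in the local model list
  OLLAMA_STATUS="installed"
else
  echo "ollama CLI not found; install it from ollama.com first"
  OLLAMA_STATUS="missing"
fi
```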
3. Configure Careti
Open VS Code and configure Careti:
- Click the Careti settings icon
- Select "Ollama" as your API provider
- Base URL: http://localhost:11434/ (default, usually no need to change)
- Select your model from the dropdown
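To confirm the base URL is correct before pointing Careti at it, you can query Ollama's REST API directly. This is a minimal sketch: /api/tags is Ollama's endpoint for listing locally downloaded models, and the OLLAMA_BASE_URL override variable here is just an illustrative convention, not a Careti setting.

```shell
# Verify that the base URL points at a reachable Ollama server.
# GET /api/tags lists the models Ollama has available locally.
BASE_URL="${OLLAMA_BASE_URL:-http://localhost:11434}"
if curl -fsS "$BASE_URL/api/tags" >/dev/null 2>&1; then
  OLLAMA_REACHABLE="yes"
  echo "Ollama is reachable at $BASE_URL"
else
  OLLAMA_REACHABLE="no"
  echo "Ollama is not reachable at $BASE_URL; is it running?"
fi
```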
Recommended Models
For the best experience with Careti, use Qwen3 Coder 30B. This model provides strong coding capabilities and reliable tool use for local development.
To download it:
ollama run qwen3-coder-30b
Other capable models include:
- mistral-small - Good balance of performance and speed
- devstral-small - Optimized for coding tasks
Important Notes
- Start Ollama before using with Careti
- Keep Ollama running in the background
- First model download may take several minutes
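On macOS and Windows the Ollama desktop app normally keeps the server running for you; on Linux or headless setups you can start it manually. A minimal sketch, assuming the ollama CLI is installed (the log path is arbitrary):

```shell
# Start the Ollama server in the background if the CLI is available.
if command -v ollama >/dev/null 2>&1; then
  nohup ollama serve >/tmp/ollama.log 2>&1 &   # keep it running after the shell exits
  SERVE_STARTED="yes"
else
  echo "ollama CLI not found; nothing to start"
  SERVE_STARTED="no"
fi
```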
Enable Compact Prompts
For better performance with local models, enable compact prompts in Careti settings. This reduces the prompt size by 90% while maintaining core functionality.
Navigate to Careti Settings → Features → Use Compact Prompt and toggle it on.
Troubleshooting
If Careti can't connect to Ollama:
- Verify Ollama is running
- Check base URL is correct
- Ensure model is downloaded
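The checks above can be scripted. This sketch probes the server and then looks for a model in the /api/tags listing; the model name llama2 is only an example, so substitute whichever model you configured.

```shell
# Minimal troubleshooting script covering the checks above.
BASE_URL="${OLLAMA_BASE_URL:-http://localhost:11434}"
MODEL="llama2"  # example model name; replace with yours

# 1. Is the Ollama server up at the configured base URL?
if curl -fsS "$BASE_URL" >/dev/null 2>&1; then
  SERVER_OK="yes"
else
  SERVER_OK="no"
fi

# 2. Does the expected model appear in the local model list?
if curl -fsS "$BASE_URL/api/tags" 2>/dev/null | grep -q "\"$MODEL"; then
  MODEL_OK="yes"
else
  MODEL_OK="no"
fi

echo "server reachable: $SERVER_OK, model $MODEL downloaded: $MODEL_OK"
```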
Need more info? Read the Ollama Docs.
